#### Abstract

The aim of this paper is to present a fault detection and isolation (FDI) algorithm based on signal processing techniques, developed for an inertial measurement unit (IMU) with minimal redundancy of fiber optic gyros. In this work, a recursive median filter is applied in order to remove impulses (outliers) arising from the data acquisition process and from parity vector operations, improving the fault detection and isolation performance. The FDI algorithm is divided into two blocks: fault detection (FD) and fault isolation (FI). The FD part of the algorithm guarantees the reliability of the isolation part and is based on parity vector analysis using the -CUSUM algorithm. The FI part is performed using a parity space projection of the energy subbands obtained from wavelet packet decomposition. This projection is an extension of clustering analysis based on singular value decomposition (SVD) and principal component analysis (PCA). The results of the FD and FI algorithms have shown the effectiveness of the proposed method: the FD algorithm is capable of indicating a low-level step bias fault with short delay, and the FI algorithm achieves a high rate of correct decisions, also for a low-level step bias fault.

#### 1. Introduction

The main goal of fault detection, isolation, and recovery (FDIR) is to effectively detect faults and accurately isolate them to a failed component in the shortest time possible. This capability reduces diagnostic time, or downtime in general, increasing system availability. Good inherent diagnostics also enhance confidence in operating the system, a key aspect of mission success. FDIR is especially required for on-orbit systems, where maintenance may be difficult, very expensive, or even impossible. The spacecraft AOCS (Attitude and Orbit Control Subsystem) includes components such as sensors and actuators. Among the attitude control devices, gyros are widely used for attitude control, mainly when high-accuracy requirements are imposed on the AOCS. This work belongs to this area and is addressed in the scenario of the orbital dynamics for a spacecraft AOCS including inertial measurement units (IMU) and FDIR software. This paper does not cover the recovery problem of FDIR but only the FDI part, that is, the fault detection and isolation problem. The methodology is based on signal processing techniques developed for an inertial measurement unit (IMU) with minimal redundancy of fiber optic gyros.

In the FDI problem with minimal redundancy, we are limited to detecting faults, generally with the aid of parity vector analysis [1–6]. By minimal redundancy, we mean the use of four inertial sensors (gyros or accelerometers) to measure a tridimensional space, where at least one of the sensors must be skewed [7]. The identification of the faulty sensor in the minimal redundancy case is usually performed by signal processing techniques [8]. In recent years, the growth of wavelet applications for fault (or variation) detection [8–11] and classification [12] has demonstrated the potential of wavelets for IMU status monitoring. In addition, real digital systems, mainly in the prototype stage, generate a large amount of impulsive noise (outliers) as a consequence of the data acquisition system and/or parity vector operations [13]. To overcome this problem, removing the outliers without degrading important variations in the sensor signals, the median filter (MF) is used in its recursive form. This kind of filter, widely used in image processing, is applied here because it preserves edge transitions without distortion [14] and is a robust estimator [15]. These properties help the wavelet analyses preserve important information in the details. Another advantage of the MF is improved change detection performance [13]. In the pattern recognition field, singular value decomposition (SVD) and principal component analysis (PCA) are useful practical tools to detect hidden structure, in the form of clustered projections, in noisy data with a reduced number of variables [16]. Therefore, this paper is dedicated to developing an algorithm for faulty sensor isolation (FI) in an IMU constructed with four fiber optic gyros (FOG), complementing the fault detection algorithm (FD) discussed in detail by de Oliveira [6].
The FD algorithm is based on parity vector analysis processed by the -CUSUM algorithm [17] for change detection and is designed in order to guarantee the reliability of the isolation part. The FI part uses an approach based on a parity space projection of the energy subbands obtained from wavelet packet decomposition. This projection is an extension of clustering analysis based on singular value decomposition (SVD) and principal component analysis (PCA).

This paper is organized as follows. Section 2 discusses the theoretical bases applied in this work, composed of geometrical and parity vector (PV) analysis, the recursive median filter (MF), the -CUSUM algorithm, wavelet packet (WP) decomposition, singular value decomposition (SVD), and principal component analysis (PCA). Section 3 shows the development of the FDI algorithm blocks and their relationships. In that section, the FI model is derived with the parity space projection of the energy subbands obtained by WP decomposition. In Section 4, the results and the respective parameters used for the FI algorithms are discussed. Section 5 presents an overview of the work and conclusions. The appendix shows the parameters obtained by the FI algorithm calibration.

#### 2. Background

The theoretical background and calibration methodology applied to the FD algorithm (geometrical and parity vector analyses, the recursive median filter, and the -CUSUM algorithm) were fully developed in previous work [6] and are omitted here. This work is dedicated to designing a complete FDI algorithm, and, to reach this goal, a new set of mathematical tools is needed to develop the FI algorithm. Therefore, in this section, the theoretical aspects related to geometry, WP, SVD, and PCA are developed.

##### 2.1. Geometry

The geometrical aspect plays an important role in the design of inertial measurement units. Several works [1, 3, 7, 18] address geometrical analyses, mainly the optimal angular relationship between sensors. In this work, a minimal redundancy (four gyros) configuration of fiber optic gyros is used, as shown in Figure 1. The nominal sensor matrix (direction cosine matrix) related to this configuration is expressed by (1) and the measurement equation by (2):

**(a)**

**(b)**

where is the sensor measurement vector (); is the observation matrix () relating the sensor axes to a triorthogonal axis system; is the state vector () in the triorthogonal axis system (); is the Gaussian white noise vector (); is the bias vector (); is the fault vector (); and is the number of sensors.

The parity vector () associated with this configuration is given by [13]

where the matrix is obtained from the null space of , so that

In the absence of biases and faults, the parity vector is a white Gaussian noise vector, and in this configuration (four sensors in a skewed arrangement) it is possible to detect only the occurrence of a fault.
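The parity vector construction above can be sketched as follows. This is a minimal illustration assuming a hypothetical skewed (tetrahedron-like) four-gyro geometry, not the paper's calibrated direction cosine matrix; it shows the parity matrix obtained from the left null space of the sensor matrix and how a step bias on one gyro inflates the parity vector:

```python
import numpy as np

# Hypothetical skewed sensor matrix for four gyros measuring a 3-D
# angular rate; the paper's calibrated matrix (Eq. (1)) would be used
# in practice.
s = 1.0 / np.sqrt(3.0)
H = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1],
              [s, s, s]], dtype=float)

# V spans the left null space of H (V @ H = 0); with four sensors in a
# three-dimensional space, V is a single row.
U, _, _ = np.linalg.svd(H)
V = U[:, 3:].T

rng = np.random.default_rng(0)
omega = np.array([0.1, -0.2, 0.05])              # true angular rate
m = H @ omega + 0.001 * rng.standard_normal(4)   # noisy measurements

p = V @ m           # parity vector: near zero in the fault-free case
m_f = m.copy()
m_f[2] += 0.8       # step bias fault injected on gyro #3
p_f = V @ m_f       # parity magnitude grows under the fault
```

Note that, with only four sensors, the parity vector indicates that some fault occurred but not, by itself, which gyro is faulty, which is why the FI stage described later is needed.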

##### 2.2. Wavelet Packet

The wavelet packet (WP) decomposition is a method based on conventional wavelet transform theory, with the difference that both filter outputs are processed. In this method, the low-pass and high-pass filter outputs are filtered at each level of decomposition, in the form of a binary symmetric decomposition tree. This method is useful when high spectral resolution analysis with low computational cost is needed. The WP decomposition structure is shown in Figure 2. At each level of decomposition, the WP generates blocks with coefficients, where is the decomposition level and is the size of the buffer (number of samples to be processed). A downsampling operator () follows the LP and HP filters at each level in order to eliminate redundant information [19].
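A minimal sketch of the binary WP tree, using the Haar filter pair for brevity (the paper's experiments use Daubechies wavelets), illustrates how decomposition level j yields 2^j subbands of N/2^j coefficients each:

```python
import numpy as np

def haar_wp_level(blocks):
    """One WP level: each block passes through LP and HP filters followed
    by downsampling by two, doubling the number of subbands."""
    out = []
    for b in blocks:
        out.append((b[0::2] + b[1::2]) / np.sqrt(2.0))  # low-pass branch
        out.append((b[0::2] - b[1::2]) / np.sqrt(2.0))  # high-pass branch
    return out

def haar_wp(x, levels):
    """Full binary WP tree down to the requested level."""
    blocks = [np.asarray(x, dtype=float)]
    for _ in range(levels):
        blocks = haar_wp_level(blocks)
    return blocks

x = np.arange(256, dtype=float)
subbands = haar_wp(x, 3)   # 2**3 = 8 subbands of 256 / 2**3 = 32 samples
```

A real implementation with longer filters would also handle boundary effects and the natural-versus-frequency ordering of the subbands; the Haar case avoids both while keeping the tree structure visible.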

##### 2.3. SVD and PCA

The singular value decomposition (SVD) is a mathematical operation defined in linear algebra that allows the factorization of a real or complex matrix. Its application is very useful in signal processing and analysis [4, 16, 20]. The SVD is defined according to Theorem 1 [21].

Theorem 1 (SVD). *Let be a matrix (real or complex) of rank . Then there exist a unitary matrix (orthogonal if is real) of order , a unitary matrix (orthogonal) of order , and a diagonal matrix of order (with positive values), such that*

*or*

*where is the conjugate transpose of and is . The values on the main diagonal of are the singular values of .*

The SVD and the eigenvalue/eigenvector decomposition are related as follows: (i) the left singular vectors of (columns of ) are the eigenvectors of ; (ii) the right singular vectors of (columns of ) are the eigenvectors of ; (iii) the nonnull singular values of are the square roots of the nonnull eigenvalues of or .

With the covariance matrix of given by , the singular values of are the square roots of the eigenvalues of its covariance matrix.

By using the SVD, the principal components of a given matrix can be obtained. Mathematically, PCA is defined as a linear operation that projects a data set into a new coordinate system, so that the largest variance component is associated with the first coordinate, the second largest variance component with the second coordinate, and so on [22]. In this way, a set of correlated variables is transformed into an uncorrelated variable set called the *principal components*. The principal components can be obtained by eigenvalue decomposition of the covariance matrix or by using the SVD. The main advantage of using PCA is the possibility of reducing the dimensionality of the data set space (size ) by selecting the main components from the space of size (), ensuring an optimal orthogonal transformation [22]. Mathematically, PCA can be obtained with the aid of the SVD by defining a set of data () arranged as a matrix according to Table 1, where stacked sequences of samples are stored, being the rank of the matrix. In order to find the effective direction of maximum variation, the average value of each row is subtracted as follows:

With the data set centered at zero mean, the SVD of the matrix can be performed as follows:

By sorting the singular values in decreasing order, the largest values, and their respective eigenvectors, can be selected according to a proper threshold criterion. Define as the pseudoenergy related to the sum of all singular values, in the form

where are the diagonal elements of , and the cumulative pseudoenergy as the cumulative sum of the pseudoenergies related to each eigenvector:

a threshold is established in order to define the number of principal components (eigenvectors) to be selected as follows:

After defining the most significant components, the matrix can be resized, keeping only the first columns. Then,

where and .
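The PCA-via-SVD procedure above, including the cumulative pseudoenergy criterion, can be sketched as follows. The data matrix is a random stand-in, and the 90% threshold is an illustrative choice (the paper's calibrated threshold appears in Section 4):

```python
import numpy as np

# Synthetic 7 x 200 data matrix (rows are stacked sequences of samples)
# with one dominant variance direction; purely illustrative data.
rng = np.random.default_rng(1)
X = rng.standard_normal((7, 200))
X[0] += 5.0 * rng.standard_normal(200)

Xc = X - X.mean(axis=1, keepdims=True)      # subtract each row's mean
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

pseudo = s / s.sum()                         # pseudoenergy per component
cumulative = np.cumsum(pseudo)               # cumulative pseudoenergy
threshold = 0.90                             # illustrative threshold
k = int(np.searchsorted(cumulative, threshold)) + 1
U_k = U[:, :k]                               # retained principal directions
```

The retained columns of `U_k` remain orthonormal, so truncation keeps the optimal orthogonal transformation property while reducing dimensionality.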

#### 3. FDI Algorithm

The basic structure of the gyro processing with the FDI blocks embedded is shown in Figure 3. In that structure, each gyro output is filtered by a recursive median filter (MF-1) in order to remove the outliers (impulsive noise), and then the gyro measurements are properly adjusted by the scale factor (SF) block. After that, the measurements follow two different branches. The first one points towards the GN&C (guidance, navigation, and control) block through a generic filter and the sensor matrix blocks. As a consequence of the geometry presented in Figure 1, and the respective model discussed in (1)-(2), the sensor matrix block performs the transformation from the sensor space (tetrahedron) to the state space (rotations about , , and ) according to (14), which represents a least squares estimation of the state, considering sensor measurements with errors. The second branch points to the FDI blocks and is divided into two distinct sets. One of them performs the online fault detection processing, and the other performs the faulty sensor identification (fault isolation) after a fault is detected. Once the faulty sensor is identified, its influence is removed from the sensor matrix (), and the GN&C starts to operate with only three pieces of FOG information:

##### 3.1. Fault Detection Algorithm

The detection of faults in an IMU with redundant sensors can be performed by PV analysis as defined in (3). Under normal conditions, that is, bias and faults with null values, the PV should present a normal distribution . However, in real systems, impulsive noise may occur [13]. In the FD block, the recursive MF is used in order to remove impulsive noise and to improve the detection performed by the -CUSUM algorithm. Figure 3 shows the FD processing structure in the left branch. Under the minimal redundancy configuration (four gyros), the FD algorithm can only indicate the occurrence of a fault, so this information is used to start the identification process with high confidence that a fault has occurred. After the gyro signals are processed by the PV block according to (3), the measurements are filtered by a second recursive MF (MF-2) to remove the spikes (or outliers) generated in the PV processing [6]. The filtered PV is applied to the -CUSUM block, which performs the fault detection according to the algorithm and calibration presented in previous work [6].
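The recursive median filtering performed by MF-1 and MF-2 can be sketched as follows. The window length and the specific recursion (already-filtered samples feeding the left half of the window) are illustrative assumptions; the actual calibration is given in [6]:

```python
import numpy as np

def recursive_median(x, k=2):
    """Recursive median filter of window length 2*k + 1: the left half of
    the window holds already-filtered samples, which strengthens impulse
    rejection while preserving edge transitions."""
    x = np.asarray(x, dtype=float)
    y = x.copy()
    for i in range(len(x)):
        lo, hi = max(0, i - k), min(len(x), i + k + 1)
        window = np.concatenate([y[lo:i], x[i:hi]])  # past outputs + inputs
        y[i] = np.median(window)
    return y

t = np.zeros(100)
t[30:] = 1.0                 # a genuine step (edge) that must survive
x_noisy = t.copy()
x_noisy[50] = 25.0           # isolated outlier (impulse) to be removed
y = recursive_median(x_noisy, k=2)
# y suppresses the impulse while keeping the step transition intact
```

This is exactly the property exploited in the FD block: the outlier at sample 50 is removed, while the step at sample 30 (which could be a real fault signature) passes through undistorted.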

##### 3.2. Fault Isolation Algorithm

The fault isolation (FI) algorithm begins processing the data stored in the buffers when a fault occurrence is indicated by the FD block. The four buffers (one for each gyro) operate in FIFO mode. The operation of the FI algorithm is based on the analysis of the principal components in the energy subbands of the data stored in the buffers. The energy subbands are obtained by using the wavelet packet decomposition up to a certain level. The idea behind this approach is to obtain a *signature* of the sensors in normal conditions, given in terms of a null space projection matrix, and then use this matrix as the kernel of the FI analysis, as demonstrated in the sequel.

###### 3.2.1. FI Algorithm Model

The modeling of the FI algorithm begins with the WP decomposition of a large amount of sensor data divided into pieces of properly defined size. The WP decomposition follows the scheme illustrated in Figure 2. Once the appropriate level of decomposition is reached, the respective subband elements are processed according to (16), which represents the average pseudo-energy level at each subband:

where and are, respectively, the number of wavelet coefficients and the wavelet coefficient vector at a given subband.

At each decomposition level , we have subbands. The first subband (superscript ) is composed of approximation coefficients and represents the sensor state information (low-pass filter output); this information is not used in the FI processing. The remaining subbands form the pseudo-energy matrix (), because those subbands contain the noise information decomposed into several spectral bands. As a consequence, after processing each buffer, the number of useful subbands is given by , and by processing buffers we can form the pseudo-energy matrix according to (17), whose dimension is :

From wavelet theory, the detail coefficients behave like zero-mean noise, and perturbations with rich spectral content introduce variations in the subband pseudo-energy magnitudes. Therefore, the SVD/PCA analysis can be applied to obtain the hidden structure in the matrix, as established in (9).
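The construction of the pseudo-energy matrix in (16)-(17) can be sketched as follows. Random coefficients stand in for the actual WP output, and the dimensions (level 3, 200 buffers) mirror the configuration used later in Section 4:

```python
import numpy as np

rng = np.random.default_rng(2)
J = 3                        # decomposition level -> 2**J subbands
M = 200                      # number of buffers processed
coeffs_per_band = 32         # coefficients per subband in each buffer

E = np.empty((2**J - 1, M))  # (2**J - 1) useful subbands x M buffers
for m in range(M):
    bands = rng.standard_normal((2**J, coeffs_per_band))  # stand-in WP output
    energies = np.mean(bands**2, axis=1)   # average pseudo-energy, Eq. (16)
    E[:, m] = energies[1:]                 # drop the approximation subband
```

Each column of `E` is the pseudo-energy vector of one buffer; the first (approximation) subband is discarded because it carries the sensor state rather than the noise pattern.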

*Note 1.* The idea behind this is to consider the noise, and other noise-like variations, as hidden structures of the sensors or, in other words, as a sensor pattern.

By decomposing into singular values (), right eigenvectors (), and left eigenvectors (), and using (9),

and selecting its principal components (PCA) as stated in (10)–(13), (18) becomes

where , , and are partial matrices obtained from , , and according to the PCA. The matrices and have the same dimensions but different content.

From PCA theory, it is known that is an orthonormal matrix that performs a rotation; then there exists a null space projection matrix , so that

Next, applying to (19), we obtain

The meaning of (21) is that the pseudo-energy vector obtained from a buffer with a certain number of samples can, at any time, be projected into a null space. The idea is that the matrix represents a *signature* of the sensor in the normal operational condition.

Performing the operation given in (21) over the pseudo-energy vectors (see (17)), a dispersion about zero is obtained (see the 2D example in Figure 4). Now, assuming the rank of is , the Euclidean distance () of the th projection point is given by

As a consequence, the mean distance and its variance are given by

respectively.
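The null space projection and the distance statistics above can be sketched as follows. One standard construction of the projector from the retained principal directions, W = I - U_k U_k^T, is assumed here, and synthetic data stand in for calibrated pseudo-energies (7 detail subbands, k = 4 components, as in Section 4):

```python
import numpy as np

rng = np.random.default_rng(3)
E = rng.standard_normal((7, 200))            # stand-in pseudo-energy matrix

Ec = E - E.mean(axis=1, keepdims=True)       # center each subband row
U, s, _ = np.linalg.svd(Ec, full_matrices=False)
k = 4                                        # principal components kept
U_k = U[:, :k]

W = np.eye(7) - U_k @ U_k.T    # null space projection matrix (signature)
P = W @ Ec                     # projection of (21): clusters about zero
d = np.linalg.norm(P, axis=0)  # Euclidean distance of each projection
d_mean, d_var = d.mean(), d.var()   # statistics feeding the threshold (24)
```

`W` annihilates the principal directions (`W @ U_k = 0`) and is idempotent, so any pseudo-energy vector consistent with the normal-condition pattern projects close to the origin, while a faulty sensor's vector lands far from it.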

###### 3.2.2. Noise Effect Reduction

The noise influence can be reduced by using a normalization technique as presented by de Oliveira [12]. In this technique the following model is assumed:

where is a coefficient vector associated with the fault and is the sensor noise vector.

Assuming that the average pseudo-energy at each subband is the autocorrelation of at , and that and are uncorrelated, the energy equation becomes

and the normalization is given by

Considering the variation of the noise energy between buffers, (27) can be rewritten as

where the th subband average energy () considering buffers is given by

Therefore, the normalized pseudo-energy matrix becomes

and can be used instead of (17).
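The per-subband normalization idea of (29)-(30) can be sketched as follows. Dividing each subband's pseudo-energy by that subband's average energy over all buffers is an assumed reading of the normalization; the paper's exact expression may differ in detail:

```python
import numpy as np

rng = np.random.default_rng(4)
# Stand-in pseudo-energy matrix (7 subbands x 200 buffers), strictly
# positive as real energies are.
E = np.abs(rng.standard_normal((7, 200))) + 0.1

E_bar = E.mean(axis=1, keepdims=True)   # per-subband average energy (Eq. (29))
E_norm = E / E_bar                      # normalized pseudo-energy matrix (Eq. (30))
```

After this normalization, every subband row has unit mean, so subbands with a high noise floor no longer dominate the SVD/PCA analysis.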

#### 4. Results

##### 4.1. FI Algorithm

The analyses of the algorithm performance were made upon two different data sets: one for analysis and calibration (alpha) and the other for validation (beta). Using the model developed in Section 3.2, the db4 wavelet (Daubechies 4), three decomposition levels (), and a time series (alpha) with 51200 samples divided into 200 buffers of 256 samples each, the matrices and were constructed. The dimension of the matrices is given by the number of subbands () and the number of buffers. In this case, the rank of the matrices is seven (seven singular values). From those seven singular values, four components were identified that represent more than 90% of the total energy (the four principal components). With rank seven and using the four greatest singular values, the matrix (or ) has dimension (see –), and consequently a tridimensional projection ( and ), according to the idea represented in Figure 4. In order to visualize the efficiency of the proposed model, the following operations were performed: (a) rotation: ; (b) normalized rotation: ; (c) parity space projection: ; (d) normalized parity space projection: ;

whose tridimensional projections are shown as scatter plots in Figure 5. In those four projections, the increase in algorithm performance is evident when the parity space projection approach is used (Figures 5(c) and 5(d)) compared to the rotations (Figures 5(a) and 5(b)). Using (24), a spherical threshold was established in order to verify the consistency of the proposed model in terms of probability of false alarm (PFA), probability of missed detection (PMD), and correct decision (CD). Figure 6 shows the tridimensional projection of the normalized parity space of the sensor noise decomposed by the db4 wavelet packet at level . As expected, most of the noise projections (about 97%) remain inside the sphere, indicating a low occurrence of false alarms. With a step bias fault randomly inserted in time into all of the 200 buffers, 200 fault projections are expected to be indicated. Figure 7 shows this operation, comparing the noise and fault projections for the case of the normalized parity space. The fault is of the step bias kind with magnitude 0.8°/s.

**(a)**

**(b)**

**(c)**

**(d)**
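The spherical-threshold decision described above can be sketched as follows. The radius rule (mean plus three standard deviations of the fault-free distances) is an assumed choice for illustration, since the paper derives its threshold from (24), and the distance values are synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)
d_nominal = np.abs(rng.standard_normal(200))   # fault-free projection distances
d_faulty = d_nominal + 5.0                     # a fault pushes points outward

# Spherical threshold: points outside the sphere are declared faulty.
radius = d_nominal.mean() + 3.0 * d_nominal.std()

false_alarms = int(np.sum(d_nominal > radius))  # nominal points outside (PFA)
missed = int(np.sum(d_faulty <= radius))        # faulty points inside (PMD)
correct = int(np.sum(d_faulty > radius))        # faulty points outside (CD)
```

With a clear separation between nominal and faulty distances, the CD count approaches the number of faulty buffers while the PFA count stays near zero, mirroring the behavior reported for Figures 6 and 7.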

After extensive analyses comparing wavelet families (db2, db4, db6, haar, symlet 3, coiflet 1, and biorthogonal 3.1), decomposition levels, buffer dimensions, and fault levels, Figures 8 and 9 summarize the best set of parameters to be used in the construction of the parity space projection matrix. The best parameters found for the validation tests are the db6 wavelet packet, buffer size 64, and decomposition level . The respective projection matrices () to be applied in the validation tests are given by –. Using these parameters on another data set, obtained with different movements of the IMU, the results summarized in Table 2 were found, where the low performance of gyro is explained by its higher noise level with respect to the other sensors.

#### 5. Conclusion

In this paper, an FDI algorithm based on signal processing techniques was developed and applied to an IMU with minimal redundancy of fiber optic gyros. This unit was built with low-quality FOGs and a prototype acquisition system that generates a large amount of impulsive noise, justifying the use of recursive median filters. The first median filter (MF-1) was applied to remove the high-level impulses, outliers of large magnitude arising from the data acquisition process. This filter is adjusted to introduce a short delay. The second filter (MF-2) was used to reduce the low-level impulses arising from the parity vector operations and, consequently, to improve the fault detection performance. However, this filter adds a new delay to the detection process, greater than that of MF-1. The fault detection part of the algorithm is used to guarantee the reliability of the isolation part, avoiding isolation actions when the fault occurrence is not confirmed by the -CUSUM algorithm. In the fault detection part, a calibration referenced to the fault isolation characteristics was developed, mainly the dimension of the buffer used in the wavelet decomposition. The fault detection results demonstrated the effectiveness of the algorithm at a low fault level with short delay (about 10 samples for a 0.3°/s step bias fault) [6]. The fault isolation algorithm, based on the parity space projection of the subband energy obtained from wavelet packet decomposition, provided expressive results in terms of correct decisions (CD for a 0.6°/s step bias fault). This parity space projection of subband energy is an extension of clustering analysis based on PCA. The step bias fault was used in both algorithms in order to verify their sensitivity, but the algorithm can detect other kinds of faults that produce variations in the spectral content of the sensors.

#### Appendix

#### Calibration of the FI Algorithm

By following the development presented in Section 3 (*FI algorithm*) and normalizing the pseudo-energy matrix (30), the following values were obtained for the FI blocks.

For the subband average energy (29):

Gyro #1:

Gyro #2:

Gyro #3:

Gyro #4:

For the normalized null space projection matrix (21) using (30):

Gyro #1:

Gyro #2:

Gyro #3:

Gyro #4:

For the threshold (the square root of (24)):

Gyro #1:

Gyro #2:

Gyro #3:

Gyro #4: