Mathematical Problems in Engineering

Volume 2015 (2015), Article ID 592480, 9 pages

http://dx.doi.org/10.1155/2015/592480

## Gravity Matching Aided Inertial Navigation Technique Based on Marginal Robust Unscented Kalman Filter

^{1}Key Laboratory of Aviation Information Technology in Universities of Shandong, Binzhou University, Binzhou 256603, China

^{2}Naval Institute of Hydrographic Surveying and Charting, Tianjin 300061, China

Received 15 April 2014; Revised 20 August 2014; Accepted 28 August 2014

Academic Editor: Kyung Y. Kim

Copyright © 2015 Ming Liu and Guobin Chang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper is concerned with gravity matching aided inertial navigation using the Kalman filter. The dynamic state space model for the Kalman filter is constructed as follows: the error equation of the inertial navigation system is employed as the process equation, while a local gravity model based on 9-point surface interpolation is employed as the observation equation. The unscented Kalman filter is employed to address the nonlinearity of the observation equation. The filter is refined in two ways. First, the marginalization technique is employed to exploit the conditionally linear substructure and thereby reduce the computational load; specifically, the number of needed sigma points is reduced from 15 to 5. Second, a robust technique based on the Chi-square test is employed to make the filter insensitive to uncertainties in the constructed observation model. Numerical simulation is carried out, and the efficacy of the proposed method is validated by the simulation results.

#### 1. Introduction

The inertial navigation system (INS) is self-contained because it neither emits nor receives radio signals. However, as a dead-reckoning system, its navigational errors grow without bound due to sensor errors and initial errors, both of which are inevitable in practice [1, 2]. Due to the inherent complementarity between INS and the global navigation satellite system (GNSS), the two can be integrated, and the growing errors of the former can be restrained by the latter. However, the reception of navigation signals from the GNSS satellites destroys the self-containedness of INS, which limits the application of the INS/GNSS integrated system in some special situations. It is therefore of great importance to find an aid for INS with the following two merits: it can provide correction information to restrain the growing errors of INS, and it preserves the self-containedness of INS. The gravity of the earth is an intrinsic physical field attached to the earth. The gravity anomaly can be viewed as a function of location on the surface of the earth [3], and hence it reflects, to some extent, positional information. So the gravity, or specifically the anomaly, can be used for navigation in the vicinity of the earth's surface. Gravity matching navigation (GMN), an analogue of terrain matching navigation (TMN), extracts positional information by matching the gravity (anomaly) measured in real time against a gravity database predetermined and stored in the navigation computer. The merits of GMN are twofold: first, its precision does not degrade over time, in contrast to the case of INS; second, it is free of receiving and emitting radio signals, that is, it is self-contained, in contrast to the case of TMN. So it is of strategic significance to aid INS using GMN. In fact, GMN aided INS has been widely studied recently; see, for example, [4–6].

The Kalman filter (KF) is one of the most celebrated real-time parameter estimators and has been implemented widely in several disciplines, especially in the community of integrated navigation, where the extended Kalman filter (EKF) has in fact become a standard component of commercial INS/GNSS products [7]. Although Kalman did not make assumptions of linearity, Gaussianity, or whiteness in the seminal paper [8], it can be proved that KF is optimal in the sense of unbiasedness, consistency, and asymptotic efficiency only when these three assumptions are valid [9]. Many efforts have been made to extend KF to address nonlinearity [10], non-Gaussianity [11, 12], cross-correlated process and measurement noises [13, 14], and colored noises [15].

KF is applied to GMN aided INS in this contribution. The dynamic state space model for KF is constructed as follows. The error equations of INS are adopted as the state process equation. A local gravity model is established through 9-point surface interpolation and adopted as the measurement equation for KF. The unscented Kalman filter (UKF) is employed to address the nonlinearity of the above constructed measurement equation. Two refinements to the UKF are also proposed. First, the marginalization technique is used to exploit the conditionally linear substructure of the nonlinear measurement equation, so that the computational load is substantially reduced. Second, the filtering algorithm is robustified against severe uncertainties in the measurement model [16]. Finally, the proposed method is applied in a simulation study, and its efficacy is validated by the simulation results.

#### 2. Model Establishment of the GMN Aided INS

In order to apply KF, the dynamic state space model of the GMN aided INS problem, in other words the state process and measurement equations, should be constructed. In this contribution, the error equation, rather than the basic equation of the INS used in some of the literature (see, e.g., [17]), is employed as the state process equation, and the local gravity model constructed through 9-point surface interpolation is adopted as the measurement equation.

##### 2.1. State Process Equation

According to the principles of INS [18], the dynamics of the INS errors is characterized by the following differential equation:

$$\dot{x}(t) = F(t)\,x(t) + G(t)\,w(t), \tag{1}$$

where the state vector is

$$x = \begin{bmatrix} \phi_N & \phi_E & \phi_D & \delta v_N & \delta v_E & \delta L & \delta\lambda \end{bmatrix}^T, \tag{2}$$

whose elements are the north, east, and downward attitude error angles, the north and east velocity errors, and the latitude and longitude errors, respectively, and the process noise vector is

$$w = \begin{bmatrix} \varepsilon_x & \varepsilon_y & \varepsilon_z & \nabla_x & \nabla_y & \nabla_z \end{bmatrix}^T, \tag{3}$$

whose elements are the measurement errors of the three gyros and the three accelerometers, respectively. These six error terms are treated as Gaussian noises in this contribution. The elements of the state transition matrix $F$ and the process noise matrix $G$ are given explicitly in several textbooks on INS (see, e.g., [18]) and are omitted here for the sake of brevity.

It is noted that the downward velocity error and the height error are not included in the state vector; in other words, the vertical channel is not considered. The reason is as follows. The errors of the vertical channel of a pure INS diverge severely, and external information is usually necessary to damp this channel and constrain the divergence. Fortunately, this kind of external information is almost always available, for example, the height obtained by measuring the air/water pressure [19]. In the presence of the external information, on the one hand, the vertical errors become relatively trivial and, on the other hand, the character of these errors depends on other factors, so there are no uniform equations in general to characterize their dynamics.

By integrating both sides of (1), we obtain the discretized equation, or the difference equation, as follows:

$$x_k = \Phi_{k|k-1}\, x_{k-1} + w_{k-1}, \tag{4}$$

where, to first order,

$$\Phi_{k|k-1} \approx I + F\,T, \tag{5}$$

and $T$ is the updating interval of the INS.
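As a quick numerical sketch of this first-order discretization (a hedged illustration: the entries of the matrix below are placeholders, not the real INS error-dynamics matrix):

```python
import numpy as np

def discretize(F, T):
    """First-order discretization of continuous error dynamics:
    Phi ~ I + F*T, valid when the INS update interval T is short."""
    return np.eye(F.shape[0]) + F * T

# 7-state toy example; only one illustrative coupling term is filled in
F = np.zeros((7, 7))
F[5, 3] = 1.0 / 6378137.0  # e.g. latitude-error rate driven by north velocity error
Phi = discretize(F, T=0.01)  # 100 Hz INS update interval
```

Higher-order terms of the matrix exponential can be retained when the product of the dynamics matrix and the interval is not small.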

##### 2.2. Measurement Equation

Summing the INS-output latitude and longitude with the corresponding error estimates, the estimated position is obtained. The nine gravity anomaly data nearest to the estimated position, together with their position information, are selected from the gravity database, and the interpolation function is constructed as follows:

$$\Delta g_i = \begin{bmatrix} 1 & L_i & L_i^2 \end{bmatrix} A \begin{bmatrix} 1 & \lambda_i & \lambda_i^2 \end{bmatrix}^T, \quad i = 1, \ldots, 9, \tag{6}$$

where $\Delta g_i$ is the $i$th gravity anomaly extracted from the gravity database; $A$ is a 3-by-3 coefficient matrix to be estimated; and $L_i$ and $\lambda_i$ are the latitude and longitude corresponding to the $i$th gravity anomaly. There are 9 elements to be estimated in $A$ and 9 equations corresponding to the 9 gravity anomalies as in (6), so this properly determined set of equations can readily be solved for $A$.
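The properly determined 9-equation system can be solved directly. Below is a minimal Python sketch, assuming the surface is bi-quadratic in monomial bases of latitude and longitude (an assumption consistent with a 3-by-3 coefficient matrix and with the 4th-order term noted in the text); the function names are illustrative:

```python
import numpy as np

def u(t):
    """Monomial basis [1, t, t^2] assumed for the surface interpolation."""
    return np.array([1.0, t, t * t])

def fit_coefficients(lats, lons, anomalies):
    """Solve the properly determined 9x9 linear system for the 3x3
    coefficient matrix A of the surface dg_i = u(L_i)^T A u(lon_i)."""
    # each row of the design matrix is kron(u(lon_i), u(L_i)),
    # which multiplies the column-major vec(A)
    M = np.array([np.kron(u(lo), u(la)) for la, lo in zip(lats, lons)])
    a = np.linalg.solve(M, np.asarray(anomalies, dtype=float))
    return a.reshape(3, 3, order="F")  # un-vectorize back to 3x3
```

With the nine database points on a 3-by-3 latitude-longitude grid, the design matrix is a Kronecker product of two Vandermonde matrices and is invertible whenever the three latitudes and the three longitudes are distinct.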

Let

$$a = \operatorname{vec}(A) = \begin{bmatrix} a_{11} & a_{21} & a_{31} & a_{12} & \cdots & a_{33} \end{bmatrix}^T, \tag{7}$$

where $a_{ij}$ indicates the $i$th-row, $j$th-column element of $A$; then we have

$$\Delta g_i = \left( \begin{bmatrix} 1 & \lambda_i & \lambda_i^2 \end{bmatrix} \otimes \begin{bmatrix} 1 & L_i & L_i^2 \end{bmatrix} \right) a, \tag{8}$$

where $\otimes$ denotes the Kronecker product.
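The vectorization step can be checked numerically. The sketch below (with hypothetical toy values) verifies the identity $u(L)^T A\, u(\lambda) = (u(\lambda) \otimes u(L))\,\operatorname{vec}(A)$ for column-major vectorization, assuming the monomial basis $u(t) = [1, t, t^2]^T$:

```python
import numpy as np

def u(t):
    # assumed monomial basis [1, t, t^2]
    return np.array([1.0, t, t * t])

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # hypothetical coefficient matrix
L, lam = 0.3, 0.7                 # toy latitude / longitude values

bilinear = u(L) @ A @ u(lam)                                 # u(L)^T A u(lam)
vec_form = np.kron(u(lam), u(L)) @ A.reshape(9, order="F")   # (u(lam) x u(L)) vec(A)
assert np.isclose(bilinear, vec_form)
```

Note the column-major (`order="F"`) convention: with row-major stacking, the Kronecker factors would have to be swapped.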

The measured gravity anomaly corresponds to the true position, that is, the INS-indicated position minus the position errors. Evaluating the interpolated surface at the true latitude $L - \delta L$ and longitude $\lambda - \delta\lambda$, we have

$$z_k = h\left( x_k \right) + v_k, \tag{14}$$

where $h(\cdot)$ denotes the interpolated anomaly (8) evaluated at the true position and $v_k$ denotes the measurement errors or noises, which represent the gravity measurement errors, the gravity database errors, the model errors, and so on. From (14), it is readily found that the term of the highest order is $\delta L^2\,\delta\lambda^2$, which is 4th order, so this measurement equation is relatively strongly nonlinear.

#### 3. The Marginal Robust Unscented Kalman Filter

In the above constructed dynamic state space model, the involved quantities are (1) the state vector $x_k$ at the $k$th epoch, which is to be estimated; (2) the measurement vector $z_k$, which can be measured; (3) the state process noise $w_k$, which represents the uncertainty of the state process model; and (4) the measurement noise $v_k$, which represents the measurement errors and the uncertainty of the measurement model. Without loss of generality, these quantities are assumed to obey the following equations:

$$x_k = f\left( x_{k-1} \right) + w_{k-1}, \tag{16}$$

$$z_k = h\left( x_k \right) + v_k. \tag{17}$$

Let $\hat{x}_{k|j}$ be an estimate of the state vector at the $k$th epoch given the measurements up to and including the $j$th epoch. Specifically, $\hat{x}_{k|k-1}$ and $\hat{x}_{k|k}$ are also called the *a priori* and *a posteriori* estimates, respectively; let $P$ with corresponding subscripts denote the associated (cross)covariances. The *a posteriori* estimate is obtained in KF as follows:

$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \left( z_k - \hat{z}_{k|k-1} \right), \quad K_k = P_{xz} P_{zz}^{-1}, \quad P_{k|k} = P_{k|k-1} - K_k P_{zz} K_k^T, \tag{18}$$

where $z_k - \hat{z}_{k|k-1}$ is often called the innovation vector [20]. For the model in (16) and (17), generally we have [21]

$$\hat{x}_{k|k-1} = E\left[ f\left( x_{k-1} \right) \right], \tag{19}$$

$$P_{k|k-1} = E\left[ \left( f\left( x_{k-1} \right) - \hat{x}_{k|k-1} \right)\left( f\left( x_{k-1} \right) - \hat{x}_{k|k-1} \right)^T \right] + Q_{k-1}, \tag{20}$$

$$\hat{z}_{k|k-1} = E\left[ h\left( x_k \right) \right], \tag{21}$$

$$P_{zz} = E\left[ \left( h\left( x_k \right) - \hat{z}_{k|k-1} \right)\left( h\left( x_k \right) - \hat{z}_{k|k-1} \right)^T \right] + R_k, \tag{22}$$

$$P_{xz} = E\left[ \left( x_k - \hat{x}_{k|k-1} \right)\left( h\left( x_k \right) - \hat{z}_{k|k-1} \right)^T \right], \tag{23}$$

where $Q_{k-1}$ and $R_k$ denote the covariances of $w_{k-1}$ and $v_k$, respectively.

In general, for a nonlinear system such as (16) and (17), (19) through (23) are not analytically tractable, and hence some approximations are necessary. Different approximations result in different nonlinear KFs. The simplest and most straightforward strategy is to retain only the zeroth- and first-order terms of the Taylor series of the nonlinear equations; then (19) through (23) can be solved, which results in the celebrated EKF. Higher-order terms, say the second-order ones, can be retained in the hope of achieving higher accuracy; this results in the second-order filters (SOF) [22].

##### 3.1. Unscented Kalman Filter

In the presence of strong nonlinearities, the simple linearization technique in EKF and SOF has some shortcomings. The accuracy is often not satisfactory, and in some situations the filter may even diverge. Moreover, the Jacobian matrix in the EKF may be rather hard to obtain, and sometimes it fails to exist; for example, for a black-box model, given any permissible input we can readily get the output, yet the mapping function from input to output is not explicitly available, so its Jacobian matrix can by no means be calculated [23]. These shortcomings cannot in general be avoided by SOF. First, the accuracy improvement of SOF over EKF is rather limited except in some special cases, for example, when the equation happens to be quadratic, so that SOF is exact in the propagation of the mean and covariance [24]. In fact, in most cases where EKF fails to converge, SOF fails as well. Second, if the Jacobian matrix is hard to calculate, the Hessian used in SOF will be even harder. As a matter of fact, SOF has found rather limited application [20].

In the 1990s, many new-concept nonlinear filters were proposed to overcome the problems concerning the accuracy and applicability of the EKF and SOF, some of which belong to the same family, namely, the deterministic-sampling, derivative-free filters. These include the UKF [23], the central difference filters [25], the divided difference filters [26], the cubature Kalman filters [27], and the sparse grid nonlinear filter [28]. There are also some other interpretations of these filters; see, for example, [29, 30]. These filters are collectively named sigma-point Kalman filters in [7]. They have the merits of being more accurate and derivative-free. In this contribution, the UKF is introduced in the most heuristic manner, following [23], after which two refinements are provided to reduce the computational burden and improve robustness against uncertainties in the measurement model.

As a variation of KF, UKF also employs the linear update structure of the KF. The core of UKF, which distinguishes it from other nonlinear filters, is a method called the unscented transformation (UT) for propagating the mean and covariance through nonlinear mapping functions. Without loss of generality, UT aims to solve the following problem.

*Problem*. Given an arbitrary nonlinear equation $y = g(x)$ and the mean $\bar{x}$ and covariance $P_{xx}$ of the input variable $x$, we want to know the mean $\bar{y}$ and covariance $P_{yy}$ of the output variable $y$, and the cross-covariance $P_{xy}$ between $x$ and $y$.

*Solution*. There are three steps of UT.

*Step 1 (sampling)*. Based on $\bar{x}$ and $P_{xx}$, a set of sigma points (SP) is deterministically generated as follows, where $n$ denotes the dimension of the vector $x$:

$$\mathcal{X}_0 = \bar{x}, \quad \mathcal{X}_i = \bar{x} + \left( \sqrt{(n+\lambda)P_{xx}} \right)_i, \quad \mathcal{X}_{i+n} = \bar{x} - \left( \sqrt{(n+\lambda)P_{xx}} \right)_i, \quad i = 1, \ldots, n, \tag{24}$$

where $\left( \cdot \right)_i$ denotes the $i$th column vector of the square root of the matrix $(n+\lambda)P_{xx}$. The square root matrix can be obtained through the efficient Cholesky decomposition [23] or the stable singular value decomposition [24].

The associated weights of the SP are as follows:

$$W_0^m = \frac{\lambda}{n+\lambda}, \quad W_0^c = \frac{\lambda}{n+\lambda} + 1 - \alpha^2 + \beta, \quad W_i^m = W_i^c = \frac{1}{2(n+\lambda)}, \quad i = 1, \ldots, 2n, \tag{25}$$

where $W_i^m$ and $W_i^c$ are used in the calculation of the mean and (cross)covariance, respectively. The scaling parameter is $\lambda = \alpha^2(n+\kappa) - n$, where $\alpha$, $\beta$, and $\kappa$ are tuning parameters. Different parameters result in different UT sampling strategies and hence different versions of the UKF; details about selecting the parameters can be found in [10, 23, 24, 31].

*Step 2 (propagating)*. Substituting the SPs into the nonlinear mapping function, a set of SPs representing the output variable is obtained:

$$\mathcal{Y}_i = g\left( \mathcal{X}_i \right), \quad i = 0, 1, \ldots, 2n. \tag{26}$$

*Step 3 (inferencing)*. We get the following statistics using the SPs propagated in the previous step:

$$\bar{y} \approx \sum_{i=0}^{2n} W_i^m \mathcal{Y}_i, \tag{27}$$

$$P_{yy} \approx \sum_{i=0}^{2n} W_i^c \left( \mathcal{Y}_i - \bar{y} \right)\left( \mathcal{Y}_i - \bar{y} \right)^T, \tag{28}$$

$$P_{xy} \approx \sum_{i=0}^{2n} W_i^c \left( \mathcal{X}_i - \bar{x} \right)\left( \mathcal{Y}_i - \bar{y} \right)^T. \tag{29}$$

This is the end of the solution.
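The three UT steps above can be sketched compactly in Python. This is a generic scaled-UT implementation with standard parameter choices, not tied to the paper's specific model:

```python
import numpy as np

def unscented_transform(g, x_mean, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Scaled UT: sample 2n+1 sigma points (Step 1), propagate them
    through g (Step 2), and form mean/(cross)covariance (Step 3)."""
    n = len(x_mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)                 # square root of (n+lam)P
    X = np.vstack([x_mean, x_mean + S.T, x_mean - S.T])   # 2n+1 sigma points
    Wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    Y = np.array([g(x) for x in X])                       # propagated sigma points
    y_mean = Wm @ Y
    dX, dY = X - x_mean, Y - y_mean
    P_yy = (Wc * dY.T) @ dY                               # output covariance
    P_xy = (Wc * dX.T) @ dY                               # input-output cross-covariance
    return y_mean, P_yy, P_xy
```

For a linear map the UT reproduces the exact mean and covariance, which provides a handy sanity check; when this is used for the measurement update, the measurement-noise covariance must still be added to the output covariance, as in (22).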

Apply UT to the state process equation (16) to get the *a priori* mean and covariance at the current epoch from the *a posteriori* mean and covariance at the previous epoch; this is essentially the prediction stage of KF, or the time update stage. Apply UT to (17) to get the predicted mean and covariance of the measurement from the *a priori* mean and covariance of the state, and then substitute these values into the framework of the KF to obtain the *a posteriori* mean and covariance at the current epoch; this is essentially the correction stage of the UKF, or the measurement update stage.

##### 3.2. Marginal Unscented Kalman Filter

The UT and UKF introduced above apply to general nonlinear process and/or measurement equations, ignoring any specific structure of these equations. Can the computational burden be reduced without decreasing the accuracy for some special nonlinear equations? Fortunately, the answer is positive. In this contribution, a special family of nonlinear equations with a conditionally linear substructure is studied. The UKF is refined using the marginalization technique to exploit this substructure: the computational burden is reduced by using fewer SPs, while the high accuracy and easy applicability of the UKF are preserved. The refined algorithm is named the marginal unscented Kalman filter (MUKF), with its core called the marginal unscented transform (MUT), which has been used in iterated filters [32] and in filters for cross-correlated process and measurement noises [13]. Specifically, for the nonlinear equation (14), the output variable is a function of only part of the elements of the input variable, namely, the latitude and longitude errors of the state vector, so in the MUT only the SPs representing the latitude and longitude errors need to be generated. As a result, the number of SPs is significantly reduced, from 15 to 5, when the MUT is employed rather than the UT.

Without loss of generality, a nonlinear equation with a conditionally linear substructure is as follows:

$$y = g\left( x_1 \right) + B\left( x_1 \right) x_2, \tag{30}$$

where the input is partitioned as $x = \begin{bmatrix} x_1^T & x_2^T \end{bmatrix}^T$, $g$ is a nonlinear function, and $B$ is an arbitrary (nonlinear or linear) matrix function. The measurement equation (14) is the special case with $B = 0$, since it depends only on the latitude and longitude errors.

Before introducing MUT, a lemma called the conditional Gaussian lemma is presented. It says the following: for two jointly Gaussian variables $x_1$ and $x_2$, $x_2$ conditioned on $x_1$ is still Gaussian.

Lemma 1. *Let*

$$x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \tag{31}$$

*be Gaussian with mean and covariance*

$$\bar{x} = \begin{bmatrix} \bar{x}_1 \\ \bar{x}_2 \end{bmatrix}, \tag{32}$$

$$P = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix}. \tag{33}$$

*Then $x_2$ conditioned on $x_1$ is still Gaussian with mean and covariance*

$$\bar{x}_{2|1} = \bar{x}_2 + P_{21} P_{11}^{-1}\left( x_1 - \bar{x}_1 \right), \tag{34}$$

$$P_{2|1} = P_{22} - P_{21} P_{11}^{-1} P_{12}. \tag{35}$$

*Proof*. A heuristic proof can be found in, for example, [13].

In fact, from (34), it can be found that the conditional mean $\bar{x}_{2|1}$ is a function of $x_1$; to make this apparent, we write $\bar{x}_{2|1}(x_1)$ in the sequel.

From (30) the mean of $y$ is

$$\bar{y} = \int \tilde{g}\left( x_1 \right) p\left( x_1 \right) dx_1, \quad \tilde{g}\left( x_1 \right) = g\left( x_1 \right) + B\left( x_1 \right)\bar{x}_{2|1}\left( x_1 \right), \tag{36}$$

where $p(x_1)$ is the probability density function of $x_1$. It is equivalent to say that the mean in (36) is obtained by propagating $x_1$ through the nonlinear mapping function $\tilde{g}$, so it can be approximated with rather high accuracy using UT:

$$\bar{y} \approx \sum_{i=0}^{2n_1} W_i^m \mathcal{Y}_i, \quad \mathcal{Y}_i = \tilde{g}\left( \mathcal{X}_{1,i} \right), \tag{37}$$

where $\mathcal{Y}_i$ is the propagated SP. In this UT, SPs representing only $x_1$ (of dimension $n_1$) are needed. The covariance of $y$ is

$$P_{yy} = \int \left( \tilde{g}\left( x_1 \right) - \bar{y} \right)\left( \tilde{g}\left( x_1 \right) - \bar{y} \right)^T p\left( x_1 \right) dx_1 + \int B\left( x_1 \right) P_{2|1} B\left( x_1 \right)^T p\left( x_1 \right) dx_1, \tag{38}$$

which can also be approximated using UT:

$$P_{yy} \approx \sum_{i=0}^{2n_1} W_i^c \left( \mathcal{Y}_i - \bar{y} \right)\left( \mathcal{Y}_i - \bar{y} \right)^T + \sum_{i=0}^{2n_1} W_i^m B\left( \mathcal{X}_{1,i} \right) P_{2|1} B\left( \mathcal{X}_{1,i} \right)^T, \tag{39}$$

where $\mathcal{X}_{1,i}$ is the SP representing $x_1$. The cross-covariance between $x$ and $y$ is

$$P_{xy} = \int \left( \begin{bmatrix} x_1 \\ \bar{x}_{2|1}\left( x_1 \right) \end{bmatrix} - \bar{x} \right)\left( \tilde{g}\left( x_1 \right) - \bar{y} \right)^T p\left( x_1 \right) dx_1 + \int \begin{bmatrix} 0 \\ P_{2|1} B\left( x_1 \right)^T \end{bmatrix} p\left( x_1 \right) dx_1, \tag{40}$$

and again it can be approximated using UT:

$$P_{xy} \approx \sum_{i=0}^{2n_1} W_i^c \left( \mathcal{X}_i - \bar{x} \right)\left( \mathcal{Y}_i - \bar{y} \right)^T + \sum_{i=0}^{2n_1} W_i^m \begin{bmatrix} 0 \\ P_{2|1} B\left( \mathcal{X}_{1,i} \right)^T \end{bmatrix}, \quad \mathcal{X}_i = \begin{bmatrix} \mathcal{X}_{1,i} \\ \bar{x}_{2|1}\left( \mathcal{X}_{1,i} \right) \end{bmatrix}. \tag{41}$$

Equations (37), (39), and (41) constitute the formulation of the MUT; note that the terms involving $P_{2|1}$ vanish for $B = 0$, as in our measurement model. Replacing UT with MUT in the framework of UKF, we get the MUKF.

A qualitative analysis of the computational burden is given here. In the standard UT, an $n \times n$ matrix needs to be decomposed (Cholesky decomposition or singular value decomposition) to generate $2n+1$ SPs, while in the MUT only an $n_1 \times n_1$ matrix needs to be decomposed to generate $2n_1+1$ SPs, with $n_1 < n$. Additionally, in MUT $P_{11}$ needs to be inverted, and some matrix algebra needs to be carried out to get the SPs representing the full state $x$. The matrix decomposition of $P_{11}$ used in generating SPs can also be reused in inverting $P_{11}$, so some computation can be saved. Overall, in generating SPs, that is, in the sampling step, the computational burdens of UT and MUT are comparable. But in the second and third steps, that is, in propagating the SPs and calculating the propagated mean and (cross)covariance, only $2n_1+1$ SPs are used in MUT while $2n+1$ are used in UT. Apparently, as $n_1 < n$, less computation is needed in MUT than in UT; in particular, when $n_1 \ll n$ the computational savings will be significant.
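A sketch of the MUT for the special case treated in this paper, where the output depends only on the subvector $x_1$ (e.g., the latitude and longitude errors): sigma points are generated for $x_1$ alone and lifted to the full state through the conditional mean of Lemma 1. Function and variable names are illustrative:

```python
import numpy as np

def marginal_ut(g, x_mean, P, n1, alpha=1e-3, beta=2.0, kappa=0.0):
    """MUT for y = g(x1) with x = [x1; x2]: only 2*n1+1 sigma points are
    generated (for x1); each is lifted to the full state through the
    conditional mean E[x2 | x1] of the conditional Gaussian lemma."""
    x1m, P11, P21 = x_mean[:n1], P[:n1, :n1], P[n1:, :n1]
    lam = alpha**2 * (n1 + kappa) - n1
    S = np.linalg.cholesky((n1 + lam) * P11)
    X1 = np.vstack([x1m, x1m + S.T, x1m - S.T])      # 2*n1+1 points only
    Wm = np.full(2 * n1 + 1, 1.0 / (2.0 * (n1 + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n1 + lam)
    Wc[0] = lam / (n1 + lam) + (1.0 - alpha**2 + beta)
    Y = np.array([g(x1) for x1 in X1])
    y_mean = Wm @ Y
    dY = Y - y_mean
    P_yy = (Wc * dY.T) @ dY
    # lift: E[x2 | x1] = x2_mean + P21 P11^{-1} (x1 - x1_mean)
    G = P21 @ np.linalg.inv(P11)
    X = np.hstack([X1, x_mean[n1:] + (X1 - x1m) @ G.T])
    dX = X - x_mean
    P_xy = (Wc * dX.T) @ dY                          # cross-cov with full state
    return y_mean, P_yy, P_xy
```

With a 7-dimensional state and a 2-dimensional $x_1$, this uses 5 sigma points instead of 15, matching the reduction quoted above.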

##### 3.3. Robust Unscented Kalman Filter

The Gaussian distribution is widely assumed in many statistical inference applications, including the classic least squares method and the KF. This is due to some elegant properties of this distribution, for example, the Gauss-Markov theorem and the central limit theorem [33]. It is also rather simple, hence making many problems tractable, because it is fully characterized by its first two moments, that is, the mean and the covariance. However, in many realistic applications the errors are not necessarily Gaussian. Real data may follow a heavy-tailed distribution, implying that outliers occur more frequently than under a strict Gaussian distribution [34]. An outlier is not necessarily a gross error, but the probability that an outlier is a gross error is rather large. Naturally, a question arises: when the assumed Gaussian distribution only approximates the real data, can the estimate produced by the least squares method or the KF still effectively approximate the true value? Unfortunately, the answer is negative. As a matter of fact, when the real distribution deviates from the assumed Gaussian distribution, the performance of KF degrades severely. This is due to its lack of robustness. More specifically, in the presence of outliers, even only a few, the accuracy of KF decreases significantly; what is more serious, the filter may diverge.

Non-Gaussianity may occur in the process noise, in the measurement noise, or in both. In this contribution, however, it is our basic assumption that it occurs only in the measurement noise. This assumption makes sense in many real-world applications. It is noted that there are methods robust against non-Gaussianity in both process and measurement noises [12, 35, 36]; however, these methods often require measurement redundancy [35], and the problem often has to be solved iteratively [36]. In many applications the measurement redundancy is not available, and iteration should be avoided to ensure real-time implementation. As to the problem studied in this contribution, the state process equation, that is, the INS error equation, effectively represents the true dynamics of the errors, and the process noise, that is, the errors in the gyro and accelerometer outputs, can safely be treated as Gaussian after careful calibration and initial alignment. However, the measurement model, that is, the local gravity model constructed through 9-point surface interpolation, may be unable to represent the real gravity field effectively, because the gravity anomaly varies in a rather irregular manner. Moreover, the precision of gravimetry can by no means be considered satisfactory nowadays, especially in real-time (not post-processed) and self-contained (without use of GNSS) situations. So the ideal Gaussian distribution may fail to exactly characterize the real measurement errors; in other words, the real errors may follow some kind of non-Gaussian distribution.

In this contribution, a new robust method, based on the $\chi^2$ test, is proposed to resist the influence of non-Gaussian measurement noise. The method can be interpreted as follows. If the measurement noise does follow the assumed Gaussian distribution, then the real measurement vector will also be (at least approximately) Gaussian, with mean being the predicted measurement in (21) and covariance being the covariance of the innovation vector in (22). Then the squared Mahalanobis distance of the real measurement from the predicted measurement should be $\chi^2$ distributed with the degree of freedom being the dimension of the measurement. Let this squared distance be the judging index and carry out a $\chi^2$ test to check the real measurement. Consider the assumed Gaussian distribution of the measurement noise as the null hypothesis. Set the significance level $\alpha$ to be a small number; the corresponding quantile can then be determined from the $\chi^2$ distribution under the null hypothesis. If the judging index is larger than the quantile, the null hypothesis should be rejected. This means that when the real measurement is too distant from the predicted one, we can consider this measurement an outlier with rather high probability, namely $1-\alpha$; at this point, we say that the outlier is detected. In the presence of an outlier, a scaling factor is introduced to tune the filter to be robust against it. The robustness is obtained because the scaling factor, being larger than 1, inflates the covariance of the innovation and hence reduces the weight of the real measurement in the update of the filter. Finally, the influence of the outlying measurement is restrained. The scaling factor is solved as follows. For the predetermined quantile $\chi_{\alpha,m}^2$, following the null hypothesis, we have

$$P\left( \chi_m^2 > \chi_{\alpha,m}^2 \right) = \alpha, \tag{42}$$

where $\chi_m^2$ denotes a $\chi^2$-distributed variable with $m$ degrees of freedom. Define the judging index as

$$\gamma_k = \left( z_k - \hat{z}_{k|k-1} \right)^T P_{zz}^{-1}\left( z_k - \hat{z}_{k|k-1} \right), \tag{43}$$

which is the squared Mahalanobis distance of $z_k$ from $\hat{z}_{k|k-1}$. Under the null hypothesis, the judging index is $\chi^2$ distributed with degree of freedom equal to the dimension of the measurement, say $m$. For a predetermined significance level $\alpha$, the quantile $\chi_{\alpha,m}^2$ can be calculated; then we have

$$P\left( \gamma_k \le \chi_{\alpha,m}^2 \right) = 1 - \alpha, \tag{44}$$

where $P(\cdot)$ denotes the probability of an event. If the realized judging index violates (44), that is, if $\gamma_k > \chi_{\alpha,m}^2$, then introduce the scaling factor $s_k$ and let

$$\left( z_k - \hat{z}_{k|k-1} \right)^T \left( s_k P_{zz} \right)^{-1}\left( z_k - \hat{z}_{k|k-1} \right) = \chi_{\alpha,m}^2. \tag{45}$$

By solving (45) we have

$$s_k = \frac{\gamma_k}{\chi_{\alpha,m}^2}. \tag{46}$$
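The chi-square gating and scaling-factor logic amounts to only a few lines. The sketch below takes the quantile as an input (it can be obtained from, e.g., `scipy.stats.chi2.ppf`); the value 6.635 used in the example is the standard $\chi^2$ 0.99-quantile for one degree of freedom:

```python
import numpy as np

def robust_scale(innovation, P_zz, threshold):
    """Chi-square gate for the innovation z - z_pred.  'threshold' is the
    (1 - alpha) chi-square quantile with df = measurement dimension
    (e.g. scipy.stats.chi2.ppf(0.99, df=m)).  Returns (s, gamma): inflating
    the innovation covariance to s*P_zz makes the gate hold with equality."""
    gamma = float(innovation @ np.linalg.solve(P_zz, innovation))  # judging index
    s = max(1.0, gamma / threshold)
    return s, gamma

# example: a 3-sigma scalar innovation with unit innovation covariance
s, gamma = robust_scale(np.array([3.0]), np.array([[1.0]]), threshold=6.635)
```

When the gate passes (judging index at or below the quantile), the factor is 1 and the filter update is unchanged.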

Hajiyev and Soken use a different criterion to detect outliers instead of (44) [37]; it is easy to find that their method neglects the off-diagonal elements of $P_{zz}$, which is not reasonable. Moreover, its statistical basis is not clear, and the corresponding significance level is hard to determine.

#### 4. Simulation Study

As previously mentioned, the errors of the inertial sensors, that is, the gyros and accelerometers, are treated as random noises. We further assume that they are white and Gaussian. The statistics of the noises, specifically the variance or standard deviation, should be determined in order to set the parameters of the process noise. However, these sensor errors are often characterized as random walks: the angular random walk (ARW) for the gyros and the velocity random walk (VRW) for the accelerometers. Note that for the accelerometers the random errors may be characterized by bias uncertainties rather than the VRW. The variance or standard deviation should be calculated from the random walk values; the details can be found, for example, in [19]. The parameters used in the simulation are listed in Table 1.