Mathematical Problems in Engineering
Volume 2013 (2013), Article ID 819565, 11 pages
http://dx.doi.org/10.1155/2013/819565
Research Article

A Novel Fusion Scheme for Vision Aided Inertial Navigation of Aerial Vehicles

College of Mechatronics and Automation, National University of Defense Technology, Changsha 410073, China

Received 17 April 2013; Revised 12 August 2013; Accepted 19 August 2013

Academic Editor: Rafael Martinez-Guerra

Copyright © 2013 Ming Xiao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Vision-aided inertial navigation is an important and practical mode of integrated navigation for aerial vehicles. In this paper, a novel fusion scheme is proposed and developed using information from the inertial navigation system (INS) and the vision matching subsystem. The scheme differs from the conventional Kalman filter (CKF): CKF treats the two information sources equally even though vision-aided navigation is subject to uncertainty and inaccuracy. By focusing on the reliability of vision matching, the fusion scheme of integrated navigation is upgraded: not only the matched positions but also their degrees of reliability are used. Moreover, a fusion algorithm is designed and proved optimal in that it minimizes the variance of the mean-square-error estimate. Simulations are carried out to validate the effectiveness of this novel navigation fusion scheme. The results show that the new fusion scheme outperforms both CKF and the adaptive Kalman filter (AKF) in vision/INS estimation under the given scenarios and specifications.

1. Introduction

Over the past decade, aerial vehicles have been widely developed and used in military and civilian cases, such as reconnaissance, surveying, mapping, and geophysical exploration [1]. Navigation focuses on monitoring the movement of vehicles and has been a key component in the application of aerial vehicles. Therefore, it is of significance to develop reliable navigation technologies to ensure that the vehicles know their own positions and orientations during missions or tasks.

The inertial navigation system (INS) is the most widely applicable mode since, once initialized, it requires no external references to determine its position, orientation, or velocity [2, 3]. However, as a dead-reckoning process, it lacks long-term accuracy because of the noises and biases contained in the inertial measurements [4].

Several navigation frameworks aided by the global positioning system (GPS) [1, 5], vision [1, 6–10], or terrain [11, 12] are commonly employed to restrict the growth of INS error. With the development of visual sensors and matching algorithms, vision matching has become a particularly promising mode [13–16]. Moreover, infrared and radar imaging make vision-aided navigation available regardless of darkness, clouds, or rain (all-weather operation), and these high-resolution measurements also improve navigation accuracy. Vision-aided inertial navigation has therefore received considerable attention from scientists and engineers in recent years [1, 6–10]. In image registration (or matching), the sensed images are geometrically overlaid on reference images prestored in the vehicle [17, 18]; that is, the position and attitude of a vehicle can be produced by an onboard matcher during flight. The vision-aided approach is advantageous for long-term navigation because the matched information is independent of the INS and has no error accumulation [8].

The well-known conventional Kalman filter (CKF) is widely implemented in vision/INS integration systems. CKF is a variance-minimizing estimator that fuses various information sources [19–21], but it treats all sources equally, regardless of their origins or principles.

In vision-aided navigation, matching performance is related to the source reference images. For instance, images of ocean or desert contain few matchable features, so the matching error may exceed hundreds of meters. In that case, CKF is not directly suitable for vision/INS integrated navigators because its measurement-noise model is incorrect.

A key step in Kalman filtering is determining an appropriate measurement-noise covariance matrix R [22]. To solve this problem, several adaptive Kalman filter (AKF) approaches have been proposed to estimate the noise covariance [23–25], most of them using the innovation or residual sequence [26, 27]; that is, the estimated covariance matrix of the innovations or residuals should match its theoretical form [28]. AKF thus estimates R from previous observations. In vision-aided navigation, however, the matching error can change abruptly from one epoch to the next because the content of the source image changes. AKF may therefore fail to obtain an appropriate R, which slows the convergence of the filtering response and degrades filtering performance.
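As a rough sketch of the innovation-matching idea behind AKF, the measurement-noise covariance is estimated so that the sample covariance of recent innovations matches its theoretical form. The function name, window handling, and dimensions below are illustrative assumptions, not taken from [27]:

```python
import numpy as np

def estimate_r_innovation(innovations, H, P_pred, window=20):
    """Innovation-based estimate of the measurement-noise covariance R.

    Uses R_hat = C_v - H P_pred H^T, where C_v is the sample covariance
    of the last `window` innovations (zero-mean assumed). This is the
    standard innovation-matching idea; names here are illustrative.
    """
    recent = np.asarray(innovations[-window:])
    c_v = recent.T @ recent / len(recent)       # sample innovation covariance
    r_hat = c_v - H @ P_pred @ H.T              # subtract the predicted part
    return r_hat
```

Because the estimate depends on a window of past innovations, an abrupt change in the true noise level (as with sudden miss-matching) is only reflected after the window fills with new data, which is the slow-response effect described above.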

Different from AKF, a novel fusion scheme is proposed and developed in this paper to automatically assign higher weights to reliable vision navigation data and vice versa. The scheme fuses the vision and INS estimates according to their reliability. The reliability of image matching data relates to the image content and can be calculated a priori: it is the probability of a successful match, which is higher in feature-rich areas and lower in feature-poor areas. The matching error is divided into two parts: a constant error and a reliability error. The constant error is determined by the vision sensor, its installation, and the image registration (matching) algorithm, while the reliability error is caused by unreliable matching.

The remainder of this paper is organized as follows. The vision/INS navigation fusion problem is formulated in Section 2, and a novel fusion scheme for vision-aided inertial navigation is proposed and developed in Section 3; the proposed scheme is proved to minimize the variance of the mean square error (MSE) estimate. In Section 4, simulations validate the effectiveness of the developed vision/INS fusion scheme in comparison with CKF and AKF. Finally, concluding remarks are given in Section 5.

2. Formulation on Vision/INS Data Fusion Problem

In vision-aided navigation, CKF obtains the optimal weighted mean by combining the independent vision and INS estimates [20]. The CKF fusion scheme is formulated in this section. CKF considers both estimates reliable; but because of the uncertainty in image registration, the position matching error can exceed hundreds of meters in areas with low information content, sometimes even leading to filter divergence. This challenge for vision/INS fusion is then analyzed.

2.1. Conventional Fusion Scheme

CKF is a popular fusion scheme for integrated navigation. Its one-dimensional data fusion can be formulated as follows [20]. For a one-dimensional discrete-time signal $x$, there are two estimates $\hat{x}_1$ and $\hat{x}_2$:
$$\hat{x}_1 = x + v_1,\quad v_1 \sim N(0, \sigma_1^2), \tag{1}$$
$$\hat{x}_2 = x + v_2,\quad v_2 \sim N(0, \sigma_2^2), \tag{2}$$
where $N$ denotes a normal distribution and $v_1$ and $v_2$ are the errors between the estimated signal and the original signal. Data fusion under the minimum-variance restriction is expressed as [20]
$$\hat{x} = \hat{x}_1 + w(\hat{x}_2 - \hat{x}_1) \tag{3}$$
or
$$\hat{x} = (1-w)\hat{x}_1 + w\hat{x}_2,\qquad w = \frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2}, \tag{4}$$
where $w$ is the optimal weight factor. When (4) is extended to multiple dimensions, the equations of CKF can be derived [20].

2.2. Challenges of Vision/INS Fusion

A vision-aided navigation sensor can provide the position and attitude of a vehicle. This paper concerns only the simple situation in which position is the sole observation.

In vision-aided navigation, when an aerial vehicle passes over areas with low information content (e.g., ocean, desert, or open countryside), the position matching error can exceed hundreds of meters, which changes the measurement-noise covariance matrix. In this circumstance, CKF is no longer the optimal fusion scheme, so the weight factor in (4) should be changed. Furthermore, not only the errors of the estimated signals but also their reliability should be considered; the reliability of image matching data relates to the image content and can be calculated a priori as the probability of a successful match, which is higher in feature-rich areas and lower in feature-poor areas. Thus, a reliability-based Kalman filter (RKF) is proposed. Figure 1 illustrates the difference between CKF (left) and RKF (right): the INS signal is considered reliable (reliability always equal to 1) in both CKF and RKF, whereas, in contrast with CKF, the reliability of the image matching data is taken into account in RKF. In other words, CKF is the special case of RKF in which the reliability always equals 1.

Figure 1: Conventional Kalman filter (left) and reliability-based Kalman filter (right).

One-dimensional matching results from a vision sensor can be expressed as
$$\hat{x}_v = x + v_c + v_r, \tag{5}$$
where the matching error is divided into two parts: the constant error $v_c$ and the reliability error $v_r$. $v_c$ is white Gaussian noise determined by the vision sensor, its installation, and the image registration (matching) algorithm; it is constant and independent of the image region. $v_r$ represents the miss-matching error due to unreliability. Without any prior knowledge, a uniform distribution is a suitable assumption for miss-matching. Thus, at epoch $k$, the probability density function of $v_r$ satisfies
$$f(v_{r,k}) = p_k\,\delta(v_{r,k}) + \frac{1-p_k}{L},\qquad v_{r,k}\in\left[-\tfrac{L}{2},\tfrac{L}{2}\right], \tag{6}$$
where $p_k$ is the reliability of the match, $L$ is the range of $v_r$, and $x$ is the ideal position matching result at epoch $k$. In other words, (6) shows that when the constant error is zero, the reliability is the probability that the estimated signal equals the original signal. When $p_k \equiv 1$, meaning the vision information is always reliable, (5) is equivalent to (2).
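The mixture error model of (5)-(6) is straightforward to simulate; a hedged sketch, with function name and parameter choices of our own:

```python
import numpy as np

def sample_matching_error(p, L, sigma_c, n, seed=0):
    """Draw n matching errors under the two-part model:
    a constant white-Gaussian error v_c ~ N(0, sigma_c^2), plus a
    miss-matching term v_r that is 0 with probability p (reliable match)
    and uniform over a range of width L otherwise."""
    rng = np.random.default_rng(seed)
    v_c = rng.normal(0.0, sigma_c, n)               # constant (sensor) error
    miss = rng.random(n) >= p                        # which matches fail
    v_r = np.where(miss, rng.uniform(-L / 2, L / 2, n), 0.0)
    return v_c + v_r
```

With p = 1 the samples reduce to pure Gaussian noise of standard deviation sigma_c, while lower reliability injects large uniform outliers, mirroring the hundreds-of-meters errors described for feature-poor regions.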

In vision-aided navigation, matching reliability can be computed a priori by analyzing the reference image offline before a flight mission. For instance, a feature-rich region has high reliability, while a smooth region such as open countryside has low reliability. Under the above assumptions, CKF is not the optimal fusion scheme. Therefore, a novel vision-aided navigation fusion scheme that accounts for reliability is proposed in the next section and proved optimal in that it minimizes the variance of the MSE estimate.

3. Novel Fusion Scheme for Vision-Aided Navigation

In this section, a novel vision-aided navigation Kalman filter that accounts for reliability (RKF) is proposed in Section 3.1. The reliability-based fusion algorithm is then introduced in Section 3.2 and proved optimal under the minimum-variance restriction in Theorem 1.

3.1. Scheme of RKF for Vision/INS Fusion

With the local-level North-East-Down (NED) frame as the navigation frame, the INS error dynamics can be expressed in the state-space form of (7). The state vector in (7) comprises the velocity errors in the north and east directions, the attitude errors, the gyro biases, and the accelerometer biases. The elements of the system matrix involve the geographic latitude, the geographic longitude, the skew matrix of the Earth's rotation rate, the Earth's transverse radius, the accelerometer's measurement, the vehicle's velocity, and the elements of the direction cosine matrix. The measurement model relates the state to the observation through the measurement matrix plus measurement noise; the observation is the difference between the INS and vision positions.
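The vision/INS correction at each epoch is then a standard Kalman measurement update with the position difference as observation; a generic textbook sketch, not the paper's full INS error model:

```python
import numpy as np

def kf_measurement_update(x, P, z, H, R):
    """Standard Kalman measurement update for z = H x + eta, where z is
    the INS-minus-vision position difference and R the measurement-noise
    covariance. Generic textbook form; dimensions are illustrative."""
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x + K @ (z - H @ x)             # corrected state estimate
    P_new = (np.eye(len(x)) - K @ H) @ P    # updated covariance
    return x_new, P_new
```

The RKF of this paper differs from the CKF only in how R is chosen at each epoch, so the same update routine serves both filters.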

The measurement-noise matrix in the CKF is then formed from the variances of the position measurement errors.

The steps in RKF are as follows.

Inputs. Range $L$ of the reference image.

Position error between the INS and the image matching result.

Matching reliability of the vehicle's position at epoch $k$.

RKF Algorithm

Step 1. Determine the possible ranges of the two position-error components and compute the corresponding scale factors, as in (14).

Step 2. Compute the mapped observation error according to (15).

Step 3. Form the intermediate quantity used in Step 4.

Step 4. Update the measurement-noise matrix as in (17).

Step 5. Execute the CKF with the updated measurement-noise matrix.
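The steps above can be sketched as follows. Since the exact expressions of (14)–(17) are not reproduced here, the noise update below uses the variance of the mixture model (5)-(6) as one plausible reading; this choice, and all names, are our assumptions rather than the paper's formulas:

```python
import numpy as np

def reliability_noise_update(sigma_c, p, L):
    """Effective measurement variance under the two-part error model:
    the constant-error variance plus the uniform miss-matching spread
    weighted by (1 - reliability). Var of a U(-L/2, L/2) term that occurs
    with probability (1 - p) is (1 - p) * L^2 / 12."""
    return sigma_c**2 + (1.0 - p) * L**2 / 12.0

def rkf_R(sigma_lat, sigma_lon, p, L_lat, L_lon):
    # Step 4 analogue: diagonal measurement-noise matrix with the
    # reliability-updated variances for latitude and longitude.
    return np.diag([reliability_noise_update(sigma_lat, p, L_lat),
                    reliability_noise_update(sigma_lon, p, L_lon)])
```

When p = 1 the matrix reduces to the constant-error covariance of the CKF, consistent with CKF being the special case of RKF with reliability 1.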

3.2. Reliability-Based Fusion

In this section, the reliability-based fusion scheme is proposed under the minimum-variance restriction.

Condition. The following assumptions hold. (a) Rewrite (5) with the positioning result obtained via the vision sensor. (b) At epoch $k$, the probability density function of the miss-matching error satisfies a mixed distribution of the form (6), where $p_k$ is the reliability and $L$ is the range of the error.

Theorem 1. A reliability-based fusion is given as follows.
The fusion algorithm can be expressed as in (19).
Finding the optimal weight factor under the minimum-variance restriction is equivalent to minimizing the variance of (19); with the auxiliary quantities defined in (21),
the weight factor can be expressed as in (24).

The weight factor is consistent with (17) extended to two dimensions. We now prove that (24) is the optimal weight factor under the minimum-variance restriction.

Proof. The weight factor in (24) is theoretically optimal, as shown in (25)–(52).
On the right side of (19), the two terms are independent of each other; according to the properties of variance, (25) and (26) follow. Thus only the remaining variance term is needed.
At epoch $k$, with suitable shorthand notation, the probability density function is expressed as (27), where $\delta$ is the Dirac delta function. The expectation can then be computed as in (28).
As the integrand has a discontinuity point of the first kind, (28) can be rewritten as (29).
The variance can be expressed as (30); therefore, (31) follows.
The two terms on the right side of (31) are defined and evaluated in turn.
Using the sifting property of the delta function, the corresponding terms simplify.
To simplify the calculation, the variable is mapped into a normalized interval, and the two error terms are mapped accordingly; they can be expressed as in (35).
The scale factor of this mapping is defined accordingly.
After the mapping, the corresponding expectations are computed; likewise for the second term.
With a further substitution,
adding (40) and (41) yields (42).
According to (25), (26), and (42), the total variance follows. After the substitutions, the variance is written as a function of the weight factor. The value of the weight factor that minimizes the variance is obtained by differentiating this expression with respect to it and setting the derivative to zero. Hence, the optimal weight factor under the minimum-variance restriction is obtained.
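The differentiation step has the same shape as the classic single-epoch minimum-variance argument; as a sketch, in the scalar notation of (1)–(4) rather than the paper's mapped variables:

```latex
\sigma^2(w) = (1-w)^2\sigma_1^2 + w^2\sigma_2^2, \qquad
\frac{d\sigma^2}{dw} = -2(1-w)\sigma_1^2 + 2w\sigma_2^2 = 0
\quad\Longrightarrow\quad
w^\ast = \frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2},\qquad
\sigma^2(w^\ast) = \frac{\sigma_1^2\sigma_2^2}{\sigma_1^2 + \sigma_2^2}.
```

In the RKF the variance of the vision term additionally depends on the reliability, so the optimal weight shrinks toward the INS estimate as reliability decreases.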
Transforming back through (35), and combining (21), (35), and (44), the corresponding data fusion formula follows.
Hence, (24) is proved. In real applications, however, the true reliability is usually unavailable, so its estimate is used instead. Apparently, when the reliability equals 1, the fusion formula reduces to the traditional form in (4); in other words, the traditional fusion method is a special case of the proposed data fusion method.
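A quick Monte Carlo check makes the benefit concrete. The sketch below fuses an INS estimate with a vision fix whose noise follows the mixture model of (5)-(6), comparing equal treatment (trusting the vision fix's nominal variance) against a reliability-aware weight that uses the full mixture variance. All parameter values and names are illustrative assumptions, not the paper's simulation settings:

```python
import numpy as np

def mc_compare(p=0.5, L=600.0, s_ins=10.0, s_c=4.0, n=20000, seed=1):
    """Compare MSE of equal-treatment fusion versus reliability-aware
    fusion under the mixture vision-error model. Illustrative only."""
    rng = np.random.default_rng(seed)
    x = 0.0                                          # true position
    ins = rng.normal(x, s_ins, n)                    # INS estimates
    miss = rng.random(n) >= p                        # unreliable matches
    vis = rng.normal(x, s_c, n) + np.where(miss, rng.uniform(-L/2, L/2, n), 0.0)

    def fuse(var_vis):
        w = s_ins**2 / (s_ins**2 + var_vis)          # weight on vision
        return (1 - w) * ins + w * vis

    mse_equal = np.mean(fuse(s_c**2) ** 2)                       # nominal variance only
    mse_rkf = np.mean(fuse(s_c**2 + (1 - p) * L**2 / 12) ** 2)   # mixture variance
    return mse_equal, mse_rkf
```

With half of the matches unreliable, the equal-treatment weight trusts the vision fix far too much, while the reliability-aware weight keeps the fused error near the INS level, the behavior claimed for RKF over CKF.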

4. Simulation and Discussion

4.1. Scenarios and Specifications

A MATLAB/Simulink platform is constructed for the simulation of vision-aided inertial navigation. A 600 × 1600 pixel synthetic aperture radar (SAR) image with 4 m resolution is used as the reference. The red lines in Figures 2(a) and 2(b) show the flight trajectory (55 s) in the real world and in the reference image, respectively. At each epoch during the simulation (50 Hz), the reference image is cropped into a rectangular subimage of 150 × 150 pixels centered at the vehicle's position at that epoch; the observed image is then obtained by adding white Gaussian noise. The starting point of the trajectory in the reference image is shown in Figure 2(b).

Figure 2: Flight trajectory.
4.2. Generation of Matching Reliability

Figure 3 illustrates the generation of matching reliability. From the single reference image (Figure 2(b)), 10 fully observed images were obtained by adding Gaussian white noise of different levels. As shown in Figure 3, at each point of the flight trajectory the observed images are obtained by cropping the fully observed images at that point, so the 10 fully observed images generate 10 different matching results. The matching reliability is obtained by analyzing these matching results (elaborated in the next paragraph), and the 10 image matching errors are shown in Figure 4. As expected, the matching error increases with the noise of the observed image. The matching error eventually converges because, at the beginning of the flight, the image is very smooth and easily causes large matching errors; after 10–30 s, the image content becomes relatively rich, which allows more reliable matching.

Figure 3: Generation of matching reliability.
Figure 4: The 10 image matching errors; each curve represents the image matching errors obtained with a different fully observed image.

At each point of the flight trajectory, if the error is less than 4 m (1 pixel), the match is regarded as correct; otherwise, it is incorrect. Hence, the constants in (14) are set to 4 m, which is the constant error in (5). The matching reliability of a point is $p = m/n$, where $n$ denotes the total number of matches for a single point of the flight trajectory and $m$ is the number of correct matches. The matching reliability is shown in Figure 5, which illustrates the matching reliability of the vehicle's position at each epoch.
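The reliability computation can be sketched directly; function name and the example error values are ours:

```python
import numpy as np

def matching_reliability(errors, threshold=4.0):
    """Reliability of one trajectory point as the fraction of its matches
    whose absolute error is below the 4 m (1 pixel) threshold,
    i.e. p = (number of correct matches) / (total matches)."""
    errors = np.asarray(errors, dtype=float)
    return np.count_nonzero(np.abs(errors) < threshold) / errors.size
```

Applied along the trajectory, this yields one reliability value per point, which is exactly the quantity fed to the RKF noise update at each epoch.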

Figure 5: Matching reliability versus time; it represents the matching reliability of the vehicle's position at each epoch.

Figure 5 clearly shows that the matching reliability gradually increases with time, consistent with Figure 2(b), in which the features of the reference image increase along the flight course.

4.3. Integrated Navigation Results and Discussion

Tables 1 and 2 show the initial conditions and the specifications of the INS, respectively.

Table 1: Initial conditions.
Table 2: Specification of INS.

Using the reliability-based Kalman filter (RKF) integrated navigation scheme described in Section 3.1, we randomly chose an image matching result for the experiment (Figure 4(j) is used in the test).

Figures 6(a) and 6(b) show the position errors of the CKF, RKF, and AKF integrated navigation. Figure 6 shows that, in this condition with miss-matching, the integrated navigation error of CKF is extremely large, so CKF cannot fulfill the integrated navigation requirement. Furthermore, the performance of integrated navigation is improved significantly by the matching reliability used in the proposed method (RKF): the position error converges to within 2.5 m.

Figure 6: Latitudinal error (a) and longitudinal error (b) of integrated navigation; RKF is compared with AKF and CKF.

Furthermore, compared with the innovation-based AKF [27], experiments with different window sizes showed similar performance, and AKF has a slower response because it estimates the measurement-noise covariance matrix from previous observations, whereas RKF estimates it from the matching reliability at each single point.

5. Conclusions

A reliability-based fusion scheme was proposed and developed for vision-aided inertial navigation systems. In the CKF scheme, different navigation sources are treated equally; in vision-aided navigation, however, miss-matching may cause large position errors, which can degrade the vision/INS integrated navigation system if the two information sources are treated equally in fusion. Therefore, a novel fusion scheme is presented that regulates the fusion weights according to the reliability of the reference scenes: higher weights are assigned to matched information overlying more reliable regions, and vice versa. The proposed fusion scheme is also proved to minimize the variance of the integrated estimate, so that the minimum MSE is achieved. Simulations validate its effectiveness: the proposed scheme improves the performance of integrated navigation compared with CKF and AKF.

Acknowledgments

This work is jointly supported by the National Natural Science Foundation of China (Grant no. 41204026) and the Open Research Foundation of Science and Technology on Aerospace Flight Dynamics Laboratory (Grant no. 2012afdl010). The authors thank Dr. Yanlong Bu for his helpful suggestions and inspiration.

References

  1. J. Wang, M. Garratt, A. Lambert et al., “Integration of GPS/INS/vision sensors to navigate unmanned aerial vehicles,” in Proceedings of the International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, pp. 963–969, Calgary, Canada, August 2008.
  2. R. Beard, D. Kingston, M. Quigley et al., “Autonomous vehicle technologies for small fixed-wing UAVs,” Journal of Aerospace Computing, Information and Communication, pp. 92–108, 2005.
  3. D. B. Kingston and R. W. Beard, “Real-time attitude and position estimation for small UAVs using low-cost sensors,” in Proceedings of the AIAA 3rd Unmanned Unlimited Technical Conference, Workshop and Exhibit, pp. 489–497, Chicago, Ill, USA, September 2004.
  4. W. Li and J. Wang, “Effective adaptive Kalman filter for MEMS-IMU/magnetometers integrated attitude and heading reference systems,” Journal of Navigation, vol. 66, no. 1, pp. 99–113, 2013.
  5. F. M. Mirzaei and S. I. Roumeliotis, “A Kalman filter-based algorithm for IMU-camera calibration: observability analysis and performance evaluation,” IEEE Transactions on Robotics, vol. 24, no. 5, pp. 1143–1156, 2008.
  6. J. Zhang, W. Liu, and Y. Wu, “Novel technique for vision-based UAV navigation,” IEEE Transactions on Aerospace and Electronic Systems, vol. 47, no. 4, pp. 2731–2741, 2011.
  7. A. I. Mourikis and S. I. Roumeliotis, “A multi-state constraint Kalman filter for vision-aided inertial navigation,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '07), pp. 3565–3572, April 2007.
  8. C. N. Taylor, M. J. Veth, J. F. Raquet, and M. M. Miller, “Comparison of two image and inertial sensor fusion techniques for navigation in unmapped environments,” IEEE Transactions on Aerospace and Electronic Systems, vol. 47, no. 2, pp. 946–958, 2011.
  9. M. Blösch, S. Weiss, D. Scaramuzza, and R. Siegwart, “Vision based MAV navigation in unknown and unstructured environments,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '10), pp. 21–28, May 2010.
  10. N. Trawny, A. I. Mourikis, S. I. Roumeliotis, A. E. Johnson, and J. F. Montgomery, “Vision-aided inertial navigation for pin-point landing using observations of mapped landmarks,” Journal of Field Robotics, vol. 24, no. 5, pp. 357–378, 2007.
  11. L. D. Hostetler and R. D. Andreas, “Nonlinear Kalman filtering techniques for terrain-aided navigation,” IEEE Transactions on Automatic Control, vol. 28, no. 3, pp. 315–323, 1983.
  12. P.-J. Nordlund and F. Gustafsson, “Marginalized particle filter for accurate and reliable terrain-aided navigation,” IEEE Transactions on Aerospace and Electronic Systems, vol. 45, no. 4, pp. 1385–1399, 2009.
  13. D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
  14. H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “Speeded-up robust features (SURF),” Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346–359, 2008.
  15. E. Rosten, R. Porter, and T. Drummond, “Faster and better: a machine learning approach to corner detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 1, pp. 105–119, 2009.
  16. S. Leutenegger, M. Chli, and R. Y. Siegwart, “BRISK: binary robust invariant scalable keypoints,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 2548–2555, November 2011.
  17. B. Zitová and J. Flusser, “Image registration methods: a survey,” Image and Vision Computing, vol. 21, no. 11, pp. 977–1000, 2003.
  18. M. Deshmukh and U. Bhosle, “A survey of image registration,” International Journal of Image Processing, vol. 5, pp. 245–269, 2011.
  19. D.-J. Jwo and C.-N. Lai, “Unscented Kalman filter with nonlinear dynamic process modeling for GPS navigation,” GPS Solutions, vol. 12, no. 4, pp. 249–260, 2008.
  20. D. Titterton and J. Weston, Strapdown Inertial Navigation Technology, The Institution of Electrical Engineers, Reston, Va, USA, 2nd edition, 2004.
  21. M. G. Petovello, M. E. Cannon, and G. Lachapelle, “Kalman filter reliability analysis using different update strategies,” in Proceedings of the CASI Annual General Meeting, Montreal, Canada, April 2003.
  22. A. H. Mohamed and K. P. Schwarz, “Adaptive Kalman filtering for INS/GPS,” Journal of Geodesy, vol. 73, no. 4, pp. 193–203, 1999.
  23. B. J. Odelson, M. R. Rajamani, and J. B. Rawlings, “A new autocovariance least-squares method for estimating noise covariances,” Automatica, vol. 42, no. 2, pp. 303–308, 2006.
  24. T. Y. Um, J. G. Lee, S. T. Park, and C. G. Park, “Noise covariances estimation for systems with bias states,” IEEE Transactions on Aerospace and Electronic Systems, vol. 36, no. 1, pp. 226–233, 2000.
  25. R. K. Mehra, “On the identification of variances and adaptive Kalman filtering,” IEEE Transactions on Automatic Control, vol. 15, no. 2, pp. 175–184, 1970.
  26. T. Y. Um, J. G. Lee, S.-T. Park, and C. G. Park, “Noise covariances estimation for systems with bias states,” IEEE Transactions on Aerospace and Electronic Systems, vol. 36, no. 1, pp. 226–233, 2000.
  27. A. Almagbile, J. Wang, and W. Ding, “Evaluating the performances of adaptive Kalman filter methods in GPS/INS integration,” Journal of Global Positioning Systems, vol. 9, pp. 33–40, 2010.
  28. P. S. Maybeck, Stochastic Models, Estimation, and Control, Elsevier, 1982.