Journal of Robotics

Special Issue: Biologically Inspired Robotics


Research Article | Open Access

Volume 2015 | Article ID 251379 | 12 pages | https://doi.org/10.1155/2015/251379

Unmanned Aerial Vehicle Navigation Using Wide-Field Optical Flow and Inertial Sensors

Academic Editor: Liwei Shi
Received: 16 Jan 2015
Revised: 12 Mar 2015
Accepted: 30 Mar 2015
Published: 01 Oct 2015

Abstract

This paper offers a set of novel navigation techniques that rely on the use of inertial sensors and wide-field optical flow information. The aircraft ground velocity and attitude states are estimated with an Unscented Information Filter (UIF) and are evaluated with respect to two sets of experimental flight data collected from an Unmanned Aerial Vehicle (UAV). Two different formulations are proposed: a full state formulation including velocity and attitude, and a simplified formulation which assumes that the lateral and vertical velocities of the aircraft are negligible. An additional state is also considered within each formulation to recover the image distance, which can alternatively be measured using a laser rangefinder. The results demonstrate that the full state formulation is able to estimate the aircraft ground velocity to within 1.3 m/s of a GPS receiver solution used as reference “truth” and to regulate attitude angles within 1.4 degrees standard deviation of error for both sets of flight data.

1. Introduction

Information about the velocity and attitude of an aircraft is important for purposes such as remote sensing [1], navigation, and control [2]. Traditional low-cost aircraft navigation relies on the use of both inertial sensors and the Global Positioning System (GPS) [3–5]. While GPS can provide useful information to an aircraft system, this information is not always available or reliable, for example when flying in urban environments or other GPS-denied areas (e.g., under radio-frequency jamming or strong solar storms). GPS is not self-contained within the aircraft system; rather, the information comes from external satellites. Insects, such as the honeybee, have demonstrated impressive capabilities in flight navigation without receiving external communications [6]. One significant information source that is used by insects as well as birds is vision [6–8]. This information can also be made available to an aircraft through the use of onboard video cameras. The challenge with this information-rich data is correctly processing and integrating the vision data with the other onboard sensor measurements [9].

Vision data can be processed using feature detection algorithms, such as the Scale-Invariant Feature Transform (SIFT) [10], among other techniques, to obtain optical flow vectors. Optical flow is useful for aircraft systems because it is rich in navigation information, simple to represent, and easy to compute [11]. One of the benefits of this information is that it can be used to extract velocity information about the aircraft, which in turn can be used for aircraft positioning. This optical flow information has been used for autonomous navigation applications such as relative heading and lateral position estimation of a quadrotor helicopter [12, 13]. Other works have considered the use of optical flow for UAV take-off and landing [14] and for landmark navigation [15]. Another potential benefit of optical flow is that it implicitly contains information about the aircraft attitude angles. This implicit information has been used in related work for UAV attitude estimation using horizon detection and optical flow along the horizon line [16, 17] and for pose estimation of a hexacopter [18], a lunar rover [19], and spacecraft [20]. While this work is useful, these vehicles have significantly different dynamic characteristics from a typical airplane. More analysis of the application of optical flow to airplanes is therefore necessary.

This work presents a combined velocity and attitude estimation algorithm using wide-field optical flow for airplanes that does not require horizon detection, which is useful because the horizon does not need to be visible in the image frame in order to obtain attitude information. The algorithm relies on the optical flow computed from a downward facing video camera, measurements from a laser rangefinder and an Inertial Measurement Unit (IMU) that are mounted in parallel with the camera axis, and a flat ground assumption to determine information about the aircraft velocity and attitude. Many of the existing experiments for optical flow and inertial sensor fusion were performed using helicopter platforms and focus on position and velocity estimation [21, 22]. This work considers an airplane system rather than a helicopter, which has a significantly different flight envelope and dynamics. Additionally, the regulation of attitude estimation errors through the use of optical flow is considered, which is not typically done in existing applications. This work takes advantage of all detected optical flow points in the image plane, including wide-field optical flow points, which were often omitted in previous works [23–25]. These wide-field optical flow points are of significant importance for attitude estimation, since they contain roll and pitch information that is not observable from the image center. Although this work considers the use of a laser rangefinder to recover the distance between the image scene and the camera, it is possible to determine this information using other techniques [26]. In fact, it has been demonstrated that the scale is an observable mode of the vision and IMU data fusion problem [27]. The presented formulation was originally offered in its early stages of development in [28]. Since this original publication, the implementation and tuning of the formulation have been refined, and additional results have been generated. In particular, a simplified formulation is offered which reduces the filter states, and the inclusion of a range state is considered. The main contribution of this paper is the analysis of a stable vision-aided solution for velocity and attitude determination without the use of GPS. This solution is verified with respect to two sets of actual UAV flight testing data.

The rest of this paper is organized as follows. Section 2 presents the considered formulations and the framework for this problem. Section 3 describes the experimental setup which was used to collect data for this study. The results are offered in Section 4, followed by conclusions in Section 5.

2. Problem Formulation

2.1. Optical Flow Equations

Optical flow is the projection of 3D relative motion onto a 2D image plane. Using the pinhole camera model, a 3D position (X, Y, Z) in the camera frame is mapped into the 2D image plane coordinates (x, y) using x = fX/Z and y = fY/Z, where x, y, and f are given in pixels and f is the focal length. For a downward looking camera whose optical axis is parallel to the aircraft body z-axis, and with a level and flat ground assumption, the wide-field optical flow equations (2) have been derived in [29], where φ and θ are the roll and pitch angles, p, q, and r are the roll, pitch, and yaw body-axis angular rates, u, v, and w are the body-axis ground velocity components of the aircraft, and μx and μy are the components of optical flow in the 2D image plane, given in pixels/sec. These equations capture the relationship between the optical flow at various parts of the image plane and the other pieces of navigation information. By considering only the area close to the image center (x ≈ 0, y ≈ 0), the narrow-field optical flow model can be simplified [23–25]; however, this removes the roll and pitch dependence of the equations and is therefore not desirable for attitude estimation purposes.
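For reference, a standard form of the pinhole-camera motion field that underlies this derivation is sketched below, under the assumption that the camera axes coincide with the aircraft body axes (x forward, y right, z down) and that Z denotes the depth of the ground point imaged at (x, y); this is a generic textbook form rather than a verbatim reproduction of (2):

\[
\mu_x = \frac{x\,w - f\,u}{Z} + \frac{x y}{f}\,p - \left(f + \frac{x^2}{f}\right) q + y\,r,
\qquad
\mu_y = \frac{y\,w - f\,v}{Z} + \left(f + \frac{y^2}{f}\right) p - \frac{x y}{f}\,q - x\,r .
\]

Under the level, flat ground assumption, the depth of the point imaged at (x, y) can be written as \(Z = h / \left[\cos\phi\cos\theta - (x/f)\sin\theta + (y/f)\sin\phi\cos\theta\right]\), where h is the height above ground; substituting such an expression is what introduces the roll and pitch dependence into the wide-field model.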

2.2. State Space Formulation and Stochastic Modeling

This work considers the simultaneous estimation of the body-axis ground velocity components (u, v, w) and the Euler attitude angles (φ, θ, ψ). This estimation is performed through the fusion of Inertial Measurement Unit (IMU) measurements of body-axis accelerations (ax, ay, az) and angular rates (p, q, r), laser rangefinder range measurements, and sets of optical flow measurements (xj, yj, μx,j, μy,j), where j = 1, ..., N. The value of N varies with each time step based on how many features in the frame can be used for optical flow calculation. Using these values, the state space model of the system is formulated with a state vector, a bias state vector, an input vector, optical flow input vectors, and output vectors. A diagram describing the definition of the range coordinate is provided in Figure 1. Note that the range coordinate is equivalent to the camera depth coordinate Z.
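One consistent arrangement of these quantities, with an ordering and bias notation assumed here for illustration (the laser range measurement enters the observation equations directly, as described below), is:

\[
\mathbf{x} = \begin{bmatrix} u & v & w & \phi & \theta & \psi \end{bmatrix}^{T},\qquad
\mathbf{b} = \begin{bmatrix} b_{a_x} & b_{a_y} & b_{a_z} & b_{p} & b_{q} & b_{r} \end{bmatrix}^{T},\qquad
\mathbf{u} = \begin{bmatrix} a_x & a_y & a_z & p & q & r \end{bmatrix}^{T},
\]
\[
\mathbf{u}_{OF,j} = \begin{bmatrix} x_j & y_j \end{bmatrix}^{T},\qquad
\mathbf{y}_j = \begin{bmatrix} \mu_{x,j} & \mu_{y,j} \end{bmatrix}^{T},\qquad j = 1,\ldots,N .
\]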

In order to determine the dynamics of the velocity states, the time derivative of the velocity vector observed from the fixed navigation frame is written as the time rate of change observed from the moving body-axis frame plus the change caused by the rotation of the frame [30]. The IMU measures the specific force, that is, the acceleration with respect to the fixed gravity vector, where the gravity vector is mapped into the body frame through the rotation matrix from the navigation frame to the body frame. Combining these results gives the dynamics for the velocity states [31], and the dynamics of the attitude states are defined using the standard Euler-angle kinematic relations [32]; both are sketched below.
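Written under the flat-Earth, body-axis conventions above, with g the gravitational acceleration and the accelerations and angular rates understood as bias-corrected specific-force and gyroscope inputs, these kinematic relations take the standard form found in [31, 32]:

\[
\begin{aligned}
\dot{u} &= r\,v - q\,w - g\sin\theta + a_x,\\
\dot{v} &= p\,w - r\,u + g\sin\phi\cos\theta + a_y,\\
\dot{w} &= q\,u - p\,v + g\cos\phi\cos\theta + a_z,
\end{aligned}
\qquad
\begin{aligned}
\dot{\phi} &= p + \tan\theta\,(q\sin\phi + r\cos\phi),\\
\dot{\theta} &= q\cos\phi - r\sin\phi,\\
\dot{\psi} &= (q\sin\phi + r\cos\phi)/\cos\theta .
\end{aligned}
\]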

To define the dynamics of the bias parameters, a first-order Gauss-Markov noise model was used. In a related work [33], the Allan deviation [34] approach presented in [35, 36] was used to determine the parameters of the first-order Gauss-Markov noise model for the dynamics of the bias on each IMU channel. The Gauss-Markov noise model for each sensor measurement involves two parameters: a time constant and a variance of the wide-band sensor noise. Using this model, the dynamics for the bias parameters are driven by a vector of time constants and a zero-mean noise vector whose covariance is a diagonal matrix of the variance terms for each sensor, as shown below. The time constant and variance terms were calculated in [33] for each channel of the same IMU that was considered for this study.
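For a single IMU channel with bias b, time constant τ, and wide-band noise variance σ², the first-order Gauss-Markov model takes the standard form

\[
\dot{b} = -\frac{1}{\tau}\,b + n, \qquad n \sim \mathcal{N}\left(0, \sigma^2\right),
\]

with the full bias vector obtained by stacking one such equation per accelerometer and rate gyroscope channel.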

The state dynamic equations have been defined in continuous time as a nonlinear state transition function of the states and inputs. In order to implement these equations in a discrete-time filter, a first-order discretization is used [37], where k is the discrete-time index and Ts is the sampling time of the system, as sketched below.
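With f denoting the continuous-time state transition function defined above, the first-order (Euler) discretization can be written as

\[
\mathbf{x}_{k+1} = \mathbf{x}_k + T_s\, f\!\left(\mathbf{x}_k, \mathbf{u}_k\right).
\]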

To formulate the observation equations, optical flow information is utilized. In particular, each optical flow point identified from the vision data consists of four values: the image plane coordinates x and y and the flow components μx and μy. These values are obtained using a point matching method [38] and the Scale-Invariant Feature Transform (SIFT) algorithm [10]. Note that the method of optical flow generation is not the emphasis of this research [38]; therefore, any other optical flow algorithm could be used within the proposed estimator without loss of generality (a minimal example of such a feature-based extraction is sketched below).
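As an illustration only, the following sketch extracts matched SIFT features from two consecutive grayscale frames and converts them into (x, y, μx, μy) tuples. OpenCV is assumed here as the SIFT implementation, and the ratio-test matching shown is a stand-in for, rather than a reproduction of, the point-matching method of [38]; the function name and parameters are hypothetical.

```python
# Illustrative sketch: matched SIFT features between two consecutive grayscale
# frames are converted into (x, y, mu_x, mu_y) tuples, where x, y are pixel
# coordinates relative to the image center and mu_x, mu_y are pixel
# displacements divided by the frame interval dt (pixels/second).
import cv2
import numpy as np

def optical_flow_points(prev_gray, curr_gray, dt, ratio=0.7):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return np.empty((0, 4))

    # Brute-force matching with Lowe's ratio test to reject ambiguous matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)

    h, w = prev_gray.shape[:2]
    cx, cy = w / 2.0, h / 2.0  # principal point assumed at the image center
    points = []
    for pair in matches:
        if len(pair) < 2:
            continue
        m, n = pair
        if m.distance < ratio * n.distance:
            x1, y1 = kp1[m.queryIdx].pt
            x2, y2 = kp2[m.trainIdx].pt
            points.append([x1 - cx, y1 - cy, (x2 - x1) / dt, (y2 - y1) / dt])
    return np.array(points) if points else np.empty((0, 4))
```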

During the state estimation process, the image plane coordinates of each detected feature are taken as inputs to the observation equation, allowing the optical flow (μx, μy) to be predicted at that point in the image plane using (2), where the range is provided by the laser rangefinder measurement. These computed observables are then compared with the optical flow measurements from the video in order to determine how to update the states. Since multiple optical flow points can be identified within a single time step, this creates a set of N observation equations, where N is the number of optical flow points at that time step.

Since (7) and (8) are derived from kinematics, the only uncertainty that must be modeled is due to the input measurements. Therefore, the input vector is taken as the measured input vector minus the vector of sensor biases, which follow a first-order Gauss-Markov noise model as determined in [33].

The uncertainty in the measurements is due to the errors in the optical flow estimation from the video. It is assumed that each optical flow measurement has an additive measurement noise vector with corresponding covariance matrix R. For this study, it is also assumed that each optical flow measurement carries equal uncertainty and that errors along the two component directions of the image plane have equal uncertainty and are uncorrelated; that is, R = σOF² I, where σOF² is the scalar variance of the optical flow measurements and I is a 2 × 2 identity matrix.

2.3. Simplified Formulation

The motion of a typical airplane is mostly in the forward direction; that is, the speed of the aircraft is primarily contained in the forward component, u, while v and w are small. With this idea, assuming that v and w are zero, the formulation is simplified to reduced state, bias state, input, optical flow input, and output vectors. Note that this simplified formulation removes the v and w states, which removes the need for the y-axis and z-axis acceleration measurements. Since the yaw state ψ is not contained in any of the state or observation equations, it has also been removed. Due to the assumption that v and w are zero, only the x-direction of optical flow is relevant. With these simplifications, the state dynamics reduce accordingly, as sketched below. The dynamics of the bias states remain the same as in the full formulation, except that the corresponding bias states for the y-axis and z-axis accelerometers have been removed, and the observation equations from (2) are simplified by setting v = w = 0 and retaining only the μx component.
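Under the assumption v = w = 0 and with the same conventions as above, the reduced state dynamics and the retained flow component take the form sketched below (a consistent reconstruction based on the generic flow model in Section 2.1, not a verbatim reproduction of the original equations):

\[
\dot{u} = -g\sin\theta + a_x,\qquad
\dot{\phi} = p + \tan\theta\,(q\sin\phi + r\cos\phi),\qquad
\dot{\theta} = q\cos\phi - r\sin\phi,
\]
\[
\mu_x = -\frac{f\,u}{Z} + \frac{x y}{f}\,p - \left(f + \frac{x^2}{f}\right) q + y\,r .
\]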

The advantage of considering this simplified formulation is primarily a reduction in the computational complexity of the system. The processing of vision data leads to a relatively large number of measurement updates, which can significantly drive up the computation time of the system, particularly at higher sampling rates. This simplified formulation not only reduces the computation time through a reduction of states, but also significantly reduces the processing and update time for optical flow measurements, since only the forward component of flow is used. This formulation could be more practical than the full state formulation for real-time implementation, especially on systems which are limited in onboard computational power due, for example, to cost or size constraints.

2.4. Inclusion of a Range State

It is possible to include a state to estimate the range in order to recover the scale of the optical flow images. To determine the dynamics of the range state, the flat ground assumption is used. With this assumption, consider the projection of the range vector onto the Earth-fixed down axis, as shown in Figure 1, by taking the projection through both the roll and pitch angles of the aircraft. A negative sign appears in this projection because the range coordinate is always positive, while the down position coordinate is negative when the aircraft is above the ground (due to the “down” convention). Taking the derivative of this projection with respect to time yields one expression for the down-velocity; a second expression is obtained by rotating the aircraft body velocity components into the Earth-fixed frame. Equating these two expressions for the down-velocity, simplifying, and substituting in the dynamics for the roll and pitch angles leads to the expression for the range state dynamics sketched below. Note that, for level conditions, that is, when the roll and pitch angles are zero, the range rate reduces to the negative of the vertical velocity, which agrees with physical intuition. In order to implement the range state in the simplified formulation, the corresponding expression with v and w set to zero can be used.
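A sketch of this derivation, writing d for the range along the camera (body z) axis and pD for the Earth-fixed down position of the aircraft with the flat ground at zero (both symbols assumed here for illustration), is:

\[
p_D = -d\cos\phi\cos\theta
\quad\Rightarrow\quad
\dot{p}_D = -\dot{d}\cos\phi\cos\theta + d\left(\dot{\phi}\sin\phi\cos\theta + \dot{\theta}\cos\phi\sin\theta\right).
\]

Rotating the body-axis velocity components into the Earth-fixed frame gives the same down-velocity as

\[
\dot{p}_D = -u\sin\theta + v\sin\phi\cos\theta + w\cos\phi\cos\theta .
\]

Equating the two expressions and solving for the range rate yields

\[
\dot{d} = \frac{u\sin\theta - v\sin\phi\cos\theta - w\cos\phi\cos\theta
+ d\left(\dot{\phi}\sin\phi\cos\theta + \dot{\theta}\cos\phi\sin\theta\right)}{\cos\phi\cos\theta},
\]

into which the attitude dynamics for the roll and pitch rates can be substituted. For level conditions (φ = θ = 0) this reduces to \(\dot{d} = -w\), and setting v = w = 0 gives the corresponding expression for the simplified formulation.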

2.5. Information Fusion Algorithm

Due to the nonlinearity, the nonadditive noise, and the multiple optical flow measurements, which range from 0 to 300 per frame with a mean of 250, the Unscented Information Filter (UIF) [39–41] was selected for the implementation of this algorithm [42]. The advantage of the information filtering framework over Kalman filtering is that redundant information vectors are additive [39–41]; therefore, the time-varying number of outputs obtained from optical flow can easily be handled with relatively low computation, since the coupling between the errors in different optical flow measurements is neglected. The UIF algorithm is summarized as follows [41].

Consider a discrete-time nonlinear dynamic system whose state transition function f includes a zero-mean Gaussian process noise vector, with measurement equations defined by an observation function h and a zero-mean Gaussian measurement noise vector. At each time step, sigma-points are generated from the prior distribution, where n is the total number of states and λ is a scaling parameter [42]. The sigma-points are then predicted by propagating each column of the sigma-point matrix through the state transition function, and the a priori statistics (predicted state and covariance) are recovered as weighted sums of the predicted sigma-points, where Q is the process noise covariance matrix and the weight vectors are defined as in [42]. Using these predicted values, the information matrix is computed as the inverse of the predicted covariance, and the information vector as the information matrix applied to the predicted state. For each measurement, that is, each optical flow pair, the output equations are evaluated at each sigma-point, yielding output sigma-points indexed by sigma-point and by measurement. The computed observation is then recovered as the weighted sum of the output sigma-points. Using the computed observation, the cross-covariance between the state and output sigma-points is calculated, from which the observation sensitivity matrix is determined. The information contributions of each optical flow measurement can then be calculated and, since information contributions are additive, summed to update the information vector and matrix; a sketch of this cycle is given below.
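The following is a minimal sketch of one UIF predict/update cycle corresponding to the summary above. It assumes additive process noise, a common 2 × 2 flow noise covariance R, standard unscented-transform weights, and the pseudo-measurement-matrix form of the information contributions; the function names, the signatures of f and h, and these simplifications are illustrative rather than the authors' implementation.

```python
# Minimal sketch of one Unscented Information Filter (UIF) cycle for the
# optical flow fusion described above. Assumptions (not from the paper):
# additive process noise Q, per-point flow noise covariance R (2x2), simple
# kappa-based sigma-point weights, and H = ((P^-)^{-1} P_xz)^T as the
# observation sensitivity matrix.
import numpy as np

def uif_step(x, P, u, flow_inputs, flow_meas, f, h, Q, R, Ts, kappa=0.0):
    n = x.size
    lam = kappa
    # --- sigma points drawn from the prior distribution ---
    S = np.linalg.cholesky((n + lam) * P)
    sigmas = np.hstack([x[:, None], x[:, None] + S, x[:, None] - S])  # (n, 2n+1)
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wm[0] = lam / (n + lam)
    wc = wm.copy()

    # --- prediction: Euler-discretized dynamics applied to each sigma point ---
    sig_pred = np.column_stack([s + Ts * f(s, u) for s in sigmas.T])
    x_pred = sig_pred @ wm
    dX = sig_pred - x_pred[:, None]
    P_pred = dX @ np.diag(wc) @ dX.T + Q

    # --- predicted information matrix and information vector ---
    Y_pred = np.linalg.inv(P_pred)
    Y_post = Y_pred.copy()
    y_post = Y_pred @ x_pred

    # --- one additive information contribution per optical flow point ---
    R_inv = np.linalg.inv(R)
    for xy, z in zip(flow_inputs, flow_meas):
        Z_sig = np.column_stack([h(s, xy) for s in sig_pred.T])   # (2, 2n+1)
        z_pred = Z_sig @ wm                                       # computed observation
        P_xz = dX @ np.diag(wc) @ (Z_sig - z_pred[:, None]).T     # cross-covariance
        H = (Y_pred @ P_xz).T                                     # sensitivity matrix
        y_post += H.T @ R_inv @ (z - z_pred + H @ x_pred)
        Y_post += H.T @ R_inv @ H

    # --- recover the posterior state estimate and covariance ---
    P_post = np.linalg.inv(Y_post)
    x_post = P_post @ y_post
    return x_post, P_post
```

Because each optical flow point contributes independently through an information-space sum, the per-frame cost grows only linearly with the number of detected features, which is what makes the time-varying measurement count easy to handle.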

3. Experimental Setup

The research platform used for this study is the West Virginia University (WVU) “Red Phastball” UAV, shown in Figure 2, with a custom GPS/INS data logger mounted inside the aircraft [28, 43]. Some details for this aircraft are provided in Table 1.


Table 1: Details of the WVU “Red Phastball” UAV.

Property             Value
Length               2.2 m
Wingspan             2.4 m
Takeoff mass         11 kg
Payload mass         3 kg
Propulsion system    Dual 90 mm ducted fan motors
Static thrust        60 N
Fuel capacity        Two 5 Ah LiPo batteries
Cruise speed         30 m/s
Mission duration     6 min

The IMU used in this study is an Analog Devices ADIS-16405 MEMS-based IMU, which includes triaxial accelerometers and rate gyroscopes. Each suite of sensors on the IMU is acquired at 18-bit resolution at 50 Hz over ranges of ±18 g’s and ±150 deg/s, respectively. The GPS receiver used in the data logger is a Novatel OEM-V1, which was configured to provide Cartesian position and velocity measurements and solution standard deviations at a rate of 50 Hz, with 1.5 m RMS horizontal position accuracy and 0.03 m/s RMS velocity accuracy. An Optic-Logic RS400 laser range finder was used for range measurement with an approximate accuracy of 1 m and range of 366 m, pointing downward. In addition, a high-quality Goodrich mechanical vertical gyroscope is mounted onboard the UAV to provide pitch and roll measurements to be used as sensor fusion “truth” data, with reported accuracy of within 0.25° of true vertical. The vertical gyroscope measurements were acquired at 16-bit resolution with measurement ranges of ±80 deg for roll and ±60 deg for pitch.

A GoPro Hero video camera is mounted at the center of gravity of the UAV, pointing downward, for flight video collection. The camera was previously calibrated to a focal length of 1141 pixels [29]. Two different sets of flight data were used for this study, each using different camera settings. The first flight used a frame size of 1920 × 1080 pixels and a sampling rate of 29.97 Hz. The second flight used a frame size of 1280 × 720 pixels and a sampling rate of 59.94 Hz. All the other sensor data were collected at 50 Hz and resampled to the camera time for postflight validation after manual synchronization.

4. Experimental Results

4.1. Flight Data

Two sets of flight data from the WVU “Red Phastball” aircraft were used in this study. Each set consists of approximately 5 minutes of flight. The top-down flight trajectories from these two data sets are overlaid on a Google Earth image of the flight test location in Figure 3. Six unique markers have been placed in Figure 3 in order to identify specific points along the trajectory. These markers are used in later figures in order to synchronize the presentation of data.

4.2. Selection of Noise Assumptions for Optical Flow Measurements

Since the noise properties of the IMU have been established in previous work [33], only the characteristics of the uncertainty in the laser range and optical flow measurements need to be determined. The uncertainty in the laser rangefinder measurement is modeled as 1 m zero-mean Gaussian noise, based on the manufacturer's reported accuracy of the sensor. The optical flow errors are more difficult to model. Due to this difficulty, different assumptions of the optical flow uncertainty were considered. Using both sets of flight data, the full state UIF was executed for each assumption of optical flow uncertainty. To evaluate the performance of the filter, the speed estimates were compared with reference measurements from GPS, which were mapped into the aircraft frame using roll and pitch measurements from the vertical gyroscope and approximating the yaw with the heading as determined by GPS. The roll and pitch estimates were compared with the measurements from the vertical gyroscope. Due to the possibility of alignment errors, only the standard deviation of error was considered. Each of these errors was calculated for each set of flight data, and the results are offered in Figure 4.

Figure 4 shows how changing the assumption on the optical flow uncertainty affects the estimation performance for the total ground speed, roll angle, and pitch angle. The relatively flat region in Figure 4 for assumed optical flow standard deviations from approximately 3 to 9 pixels indicates that this formulation is relatively insensitive to the tuning of these optical flow errors. It is also interesting to note in Figure 4 that Flight #1 and Flight #2 reach optimum performance at different values of the assumed optical flow standard deviation. This makes sense, as Flight #2 has twice the frame rate of Flight #1; therefore, the assumed noise characteristics should be one half those of Flight #1. Based on Figure 4, an appropriate optical flow uncertainty (in pixels²) was selected for each flight.

4.3. Full State Formulation Estimation Results

Using each set of flight data, the full state formulation of the UIF was executed. The estimated components of velocity are shown for Flight #1 in Figure 5 and for Flight #2 in Figure 6. These estimates from the UIF are offered with respect to comparable reference values from GPS, which were mapped into the aircraft frame using roll and pitch measurements from the vertical gyroscope and approximating the yaw angle with the heading angle obtained from GPS. From each of these figures, the following observations can be made. The forward velocity, u, is reasonably captured by the estimation. The lateral velocity, v, and vertical velocity, w, however, demonstrate somewhat poor results. This does make sense, as the primary direction of flight is forward, resulting in good observability characteristics in the optical flow for the forward direction, while the signal-to-noise ratio (SNR) for the lateral and vertical directions remains small for most typical flight conditions. However, since these lateral and vertical components are only a small portion of the total velocity, the total speed can still be reasonably approximated by this technique. The total speed estimates are shown in Figure 7 for Flight #1 with the GPS reference.

The attitude estimates for the roll and pitch angles are compared with the vertical gyroscope measurements as a reference, as shown in Figure 8. In order to demonstrate the effectiveness of this method in regulating the drift in attitude estimates that occurs with dead reckoning, the estimation errors from the UIF are compared with the errors obtained from dead reckoning attitude estimation. These roll and pitch errors are offered in Figure 9 for Flight #2. Figure 9 demonstrates the effectiveness of the UIF in regulating the attitude errors from dead reckoning.

In order to quantify the estimation results, the mean absolute error and standard deviation of error of the estimates are calculated for the velocity components with respect to the GPS reference and for the roll and pitch angles with respect to the vertical gyroscope reference. These statistical results are provided in Table 2 for Flight #1 and Table 3 for Flight #2, where the total speed is determined by Vtot = √(u² + v² + w²). It is shown in Tables 2 and 3 that reasonable errors are obtained in both sets of flight data for the velocity and attitude of the aircraft. Larger errors are noted in particular for the lateral velocity state, v, which is due to observability issues in the optical flow. Note that mean errors in the roll and pitch estimation could be due to misalignment between the vertical gyroscope, IMU, and video camera. The attitude estimation accuracy reported in Tables 2 and 3 is similar to the reported accuracy of loosely coupled GPS/INS attitude estimation using similar flight data [43].


Table 2: Estimation error statistics for the full state formulation, Flight #1.

Estimated state          Mean abs. error    Standard deviation    Units
u (forward velocity)     1.0644             1.2667                m/s
v (lateral velocity)     2.4699             2.7719                m/s
w (vertical velocity)    2.1554             1.6554                m/s
Vtot (total speed)       1.2466             1.2858                m/s
φ (roll angle)           1.1553             1.3668                deg
θ (pitch angle)          2.1752             1.3339                deg


Table 3: Estimation error statistics for the full state formulation, Flight #2.

Estimated state          Mean abs. error    Standard deviation    Units
u (forward velocity)     1.1299             1.2663                m/s
v (lateral velocity)     1.5184             1.8649                m/s
w (vertical velocity)    1.1324             1.3453                m/s
Vtot (total speed)       1.2087             1.2535                m/s
φ (roll angle)           1.8818             1.2782                deg
θ (pitch angle)          1.6646             1.1049                deg

4.4. Simplified Formulation Estimation Results

Since it was observed in the full state formulation results that the lateral and vertical velocity components were small, the simplified formulation was implemented in order to investigate the feasibility of a simplified version of the filter that estimates only the forward velocity component and assumes the lateral and vertical components are zero. The forward velocity estimate for Flight #1 is offered in Figure 10, while the roll and pitch errors with respect to the vertical gyroscope measurement are offered in Figure 11 for the UIF and dead reckoning (DR). Additionally, the mean absolute error and standard deviation of error for these terms are provided in Table 4 for Flight #1 and Table 5 for Flight #2.


Table 4: Estimation error statistics for the simplified formulation, Flight #1.

Estimated state          Mean abs. error    Standard deviation    Units
u (forward velocity)     1.2283             1.5730                m/s
φ (roll angle)           1.9901             2.2922                deg
θ (pitch angle)          1.9002             2.1881                deg


Table 5: Estimation error statistics for the simplified formulation, Flight #2.

Estimated state          Mean abs. error    Standard deviation    Units
u (forward velocity)     1.3073             1.5775                m/s
φ (roll angle)           2.8402             3.5426                deg
θ (pitch angle)          2.0286             2.4691                deg

It is shown in Tables 4 and 5 that the simplified formulation results in significantly higher attitude estimation errors with respect to the full state formulation. These increased attitude errors are likely due to the assumption that lateral and vertical velocity components are zero. To investigate this possible correlation, the roll and pitch errors are shown in Figure 12 with the magnitude of the lateral and vertical velocity as determined from GPS for a 50-second segment of flight data which includes takeoff. Figure 12 shows that there is some correlation between the attitude estimation errors and the lateral and vertical velocity, though it is not the only source of error for these estimates.

4.5. Results Using Range State

The results for each flight for both the full state formulation and the simplified formulation were recalculated with the addition of the range state. The statistical results for these tests are offered in Tables 6–9.


Table 6: Estimation error statistics for the full state formulation with the range state, Flight #1.

Estimated state          Mean abs. error    Standard deviation    Units
u (forward velocity)     1.1330             1.3699                m/s
v (lateral velocity)     2.4387             2.7369                m/s
w (vertical velocity)    2.1210             1.5998                m/s
Vtot (total speed)       1.3041             1.3852                m/s
φ (roll angle)           1.1599             1.4084                deg
θ (pitch angle)          2.1767             1.4064                deg


Table 7: Estimation error statistics for the full state formulation with the range state, Flight #2.

Estimated state          Mean abs. error    Standard deviation    Units
u (forward velocity)     1.1444             1.2937                m/s
v (lateral velocity)     1.5122             1.8529                m/s
w (vertical velocity)    1.1154             1.3268                m/s
Vtot (total speed)       1.2112             1.2761                m/s
φ (roll angle)           1.8897             1.2845                deg
θ (pitch angle)          1.6609             1.1152                deg


Table 8: Estimation error statistics for the simplified formulation with the range state, Flight #1.

Estimated state          Mean abs. error    Standard deviation    Units
u (forward velocity)     1.2919             1.6256                m/s
φ (roll angle)           2.0621             2.3746                deg
θ (pitch angle)          1.9277             2.2713                deg


Table 9: Estimation error statistics for the simplified formulation with the range state, Flight #2.

Estimated state          Mean abs. error    Standard deviation    Units
u (forward velocity)     1.3356             1.6016                m/s
φ (roll angle)           2.8748             3.5462                deg
θ (pitch angle)          2.0303             2.4876                deg

In order to compare the results from the different cases, the standard deviation of error is shown graphically for Flight #1 in Figure 13 and Flight #2 in Figure 14. It is shown in Figures 13 and 14 that the simplified formulation offers poorer estimation performance as expected, particularly for the attitude estimates. The addition of the range state does not affect the performance significantly.

5. Conclusions

This paper presented vision-aided inertial navigation techniques that do not rely upon GPS, evaluated using UAV flight data. Two different formulations were presented: a full state estimation formulation which captures the aircraft ground velocity vector and attitude, and a simplified formulation which assumes that all of the aircraft velocity is in the forward direction. Both formulations were shown to be effective in regulating the INS drift. Additionally, a state was included in each formulation in order to estimate the distance between the aircraft and the ground point at the image center. The full state formulation was shown to be effective in estimating the aircraft ground velocity to within 1.3 m/s and regulating attitude angles within 1.4 degrees standard deviation of error for both sets of flight data.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported in part by NASA Grant no. NNX12AM56A, Kansas NASA EPSCoR PDG grant, and SRI grant.

References

1. C. Li, L. Shen, H.-B. Wang, and T. Lei, “The research on unmanned aerial vehicle remote sensing and its applications,” in Proceedings of the IEEE International Conference on Advanced Computer Control (ICACC '10), pp. 644–647, Shenyang, China, March 2010.
2. A. J. Calise and R. T. Rysdyk, “Nonlinear adaptive flight control using neural networks,” IEEE Control Systems Magazine, vol. 18, no. 6, pp. 14–24, 1998.
3. M. S. Grewal, L. R. Weill, and A. P. Andrews, Global Positioning Systems, Inertial Navigation, and Integration, Wiley, New York, NY, USA, 2nd edition, 2007.
4. J. L. Crassidis, “Sigma-point Kalman filtering for integrated GPS and inertial navigation,” in Proceedings of the AIAA Guidance, Navigation, and Control Conference, pp. 1981–2004, San Francisco, Calif, USA, August 2005.
5. J. N. Gross, Y. Gu, M. B. Rhudy, S. Gururajan, and M. R. Napolitano, “Flight-test evaluation of sensor fusion algorithms for attitude estimation,” IEEE Transactions on Aerospace and Electronic Systems, vol. 48, no. 3, pp. 2128–2139, 2012.
6. M. V. Srinivasan, “Honeybees as a model for the study of visually guided flight, navigation, and biologically inspired robotics,” Physiological Reviews, vol. 91, no. 2, pp. 413–460, 2011.
7. P. S. Bhagavatula, C. Claudianos, M. R. Ibbotson, and M. V. Srinivasan, “Optic flow cues guide flight in birds,” Current Biology, vol. 21, no. 21, pp. 1794–1799, 2011.
8. N. Franceschini, “Visual guidance based on optic flow: a biorobotic approach,” Journal of Physiology Paris, vol. 98, no. 1–3, pp. 281–292, 2004.
9. P. Corke, J. Lobo, and J. Dias, “An introduction to inertial and visual sensing,” The International Journal of Robotics Research, vol. 26, no. 6, pp. 519–535, 2007.
10. D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
11. M. V. Srinivasan, S. Thurrowgood, and D. Soccol, “Competent vision and navigation systems,” IEEE Robotics & Automation Magazine, vol. 16, no. 3, pp. 59–71, 2009.
12. J. Conroy, G. Gremillion, B. Ranganathan, and J. S. Humbert, “Implementation of wide-field integration of optic flow for autonomous quadrotor navigation,” Autonomous Robots, vol. 27, no. 3, pp. 189–198, 2009.
13. F. Kendoul, I. Fantoni, and K. Nonami, “Optic flow-based vision system for autonomous 3D localization and control of small aerial vehicles,” Robotics and Autonomous Systems, vol. 57, no. 6-7, pp. 591–602, 2009.
14. A. Beyeler, J.-C. Zufferey, and D. Floreano, “optiPilot: control of take-off and landing using optic flow,” in Proceedings of the European Micro Air Vehicle Conference and Competition (EMAV '09), Delft, The Netherlands, September 2009.
15. M. M. Veth and J. Raquet, “Fusion of low-cost imaging and inertial sensors for navigation,” in Proceedings of the 19th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS '06), pp. 1093–1103, Fort Worth, Tex, USA, September 2006.
16. D. Dusha, W. Boles, and R. Walker, “Fixed-wing attitude estimation using computer vision based horizon detection,” in Proceedings of the International Unmanned Air Vehicle Systems Conference, April 2007.
17. D. Dusha, W. Boles, and R. Walker, “Attitude estimation for a fixed-wing aircraft using horizon detection and optical flow,” in Proceedings of the 9th Biennial Conference of the Australian Pattern Recognition Society on Digital Image Computing Techniques and Applications (DICTA '07), pp. 485–492, IEEE, Glenelg, Australia, December 2007.
18. S. Weiss, M. W. Achtelik, S. Lynen, M. Chli, and R. Siegwart, “Real-time onboard visual-inertial state estimation and self-calibration of MAVs in unknown environments,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '12), May 2012.
19. M. Dille, B. Grocholsky, and S. Singh, “Outdoor downward-facing optical flow odometry with commodity sensors,” in Field and Service Robotics: Results of the 7th International Conference, vol. 62 of Springer Tracts in Advanced Robotics, pp. 183–193, 2010.
20. S. I. Roumeliotis, A. E. Johnson, and J. F. Montgomery, “Augmenting inertial navigation with image-based motion estimation,” in Proceedings of the IEEE International Conference on Robotics & Automation, pp. 4326–4333, Washington, DC, USA, May 2002.
21. M. Achtelik, M. Achtelik, S. Weiss, and R. Siegwart, “Onboard IMU and monocular vision based control for MAVs in unknown in- and outdoor environments,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '11), pp. 3056–3063, Shanghai, China, May 2011.
22. P.-J. Bristeau, F. Callou, D. Vissière, and N. Petit, “The navigation and control technology inside the AR.Drone micro UAV,” in Proceedings of the 18th IFAC World Congress, pp. 1477–1484, Milano, Italy, September 2011.
23. H. Chao, Y. Gu, and M. Napolitano, “A survey of optical flow techniques for robotics navigation applications,” Journal of Intelligent and Robotic Systems: Theory and Applications, vol. 73, no. 1–4, pp. 361–372, 2014.
24. W. Ding, J. Wang, S. Han et al., “Adding optical flow into the GPS/INS integration for UAV navigation,” in Proceedings of the International Global Navigation Satellite Systems Society IGNSS Symposium, Surfers Paradise, Australia, 2009.
25. H. Romero, S. Salazar, and R. Lozano, “Real-time stabilization of an eight-rotor UAV using optical flow,” IEEE Transactions on Systems, Man, and Cybernetics—Part C: Applications and Reviews, vol. 42, no. 6, pp. 1752–1762, 2012.
26. V. Grabe, H. H. Bülthoff, and P. R. Giordano, “A comparison of scale estimation schemes for a quadrotor UAV based on optical flow and IMU measurements,” in Proceedings of the 26th IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '13), pp. 5193–5200, IEEE, Tokyo, Japan, November 2013.
27. A. Martinelli, “Vision and IMU data fusion: closed-form solutions for attitude, speed, absolute scale, and bias determination,” IEEE Transactions on Robotics, vol. 28, no. 1, pp. 44–60, 2012.
28. M. B. Rhudy, H. Chao, and Y. Gu, “Wide-field optical flow aided inertial navigation for unmanned aerial vehicles,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '14), pp. 674–679, Chicago, Ill, USA, September 2014.
29. H. Chao, Y. Gu, J. Gross, G. Guo, M. L. Fravolini, and M. R. Napolitano, “A comparative study of optical flow and traditional sensors in UAV navigation,” in Proceedings of the American Control Conference (ACC '13), pp. 3858–3863, Washington, DC, USA, June 2013.
30. R. C. Hibbeler, Engineering Mechanics: Dynamics, Prentice Hall, 10th edition, 2004.
31. V. Klein and E. A. Morelli, Aircraft System Identification: Theory and Practice, American Institute of Aeronautics and Astronautics, Reston, Va, USA, 2006.
32. J. Roskam, Airplane Flight Dynamics and Automatic Flight Controls, DARcorporation, Lawrence, Kan, USA, 2003.
33. J. N. Gross, Y. Gu, M. Rhudy, F. J. Barchesky, and M. R. Napolitano, “On-line modeling and calibration of low-cost navigation sensors,” in Proceedings of the AIAA Modeling and Simulation Technologies Conference, pp. 298–311, Portland, Ore, USA, August 2011.
34. D. W. Allan, “Statistics of atomic frequency standards,” Proceedings of the IEEE, vol. 54, no. 2, pp. 221–230, 1966.
35. X. Zhiqiang and D. Gebre-Egziabher, “Modeling and bounding low cost inertial sensor errors,” in Proceedings of the IEEE/ION Position, Location and Navigation Symposium (PLANS '08), pp. 1122–1132, IEEE, Monterey, Calif, USA, May 2008.
36. Z. Xing, Over-bounding integrated INS/GNSS output errors [Ph.D. dissertation], The University of Minnesota, Minneapolis, Minn, USA, 2010.
37. F. L. Lewis and V. L. Syrmos, Optimal Control, Wiley, New York, NY, USA, 2nd edition, 1995.
38. M. Mammarella, G. Campa, M. L. Fravolini, Y. Gu, B. Seanor, and M. R. Napolitano, “A comparison of optical flow algorithms for real time aircraft guidance and navigation,” in Proceedings of the AIAA Guidance, Navigation and Control Conference and Exhibit, Honolulu, Hawaii, USA, August 2008.
39. A. G. O. Mutambara, Decentralized Estimation and Control for Multisensor Systems, CRC Press, Washington, DC, USA, 1998.
40. T. Vercauteren and X. Wang, “Decentralized sigma-point information filters for target tracking in collaborative sensor networks,” IEEE Transactions on Signal Processing, vol. 53, no. 8, part 2, pp. 2997–3009, 2005.
41. D.-J. Lee, “Unscented information filtering for distributed estimation and multiple sensor fusion,” in Proceedings of the AIAA Guidance, Navigation and Control Conference and Exhibit, Honolulu, Hawaii, USA, 2008.
42. M. Rhudy and Y. Gu, “Understanding nonlinear Kalman filters,” in Interactive Robotics Letters, West Virginia University, 2013, http://www2.statler.wvu.edu/~irl/page13.html.
43. M. Rhudy, J. Gross, Y. Gu, and M. R. Napolitano, “Fusion of GPS and redundant IMU data for attitude estimation,” in Proceedings of the AIAA Guidance, Navigation, and Control Conference, Minneapolis, Minn, USA, August 2012.

Copyright © 2015 Matthew B. Rhudy et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
