Research Article | Open Access
An Integration Method of Inertial Navigation System and Three-Beam Lidar for Precision Landing
To ensure high accuracy, independence, and reliability of the measurement system during the unmanned aerial vehicle (UAV) landing process, an integration method of an inertial navigation system (INS) and a three-beam Lidar is proposed. Following the concept of multisensor fusion, each of the three Lidar beams is regarded as an independent sensor to be integrated with the INS. A fault-detection and reconstruction method is simultaneously adopted to enhance reliability and fault tolerance. First, the integration method is described. Then the strapdown inertial navigation system (SINS) error model is introduced and the measurement model of the SINS/Lidar integrated navigation is derived in the Lidar reference frame. The fault-detection and reconstruction method is then introduced. Finally, a numerical simulation and a vehicle test are carried out to demonstrate the validity and utility of the proposed method. The results indicate that the integration can provide high precision navigation information and that, when one or two beams of the Lidar malfunction, the system can effectively identify the faults and accomplish the reconstruction to maintain normal navigation.
1. Introduction
The UAV plays a significant role in modern warfare, scientific exploration, and civilian fields. Although the landing phase is quite short relative to the entire flight, the UAV undergoes large changes in attitude and velocity during it, and a failure of the navigation system in this phase can lead to an accident. To achieve a secure and precise landing, measurements with high precision, good real-time performance, and excellent reliability must be provided.
Presently, the navigation information during the UAV landing phase is mainly obtained from INS, the global positioning system (GPS), INS/GPS integrated navigation, and computer vision. Inertial navigation is an effective autonomous measurement method that is not easily jammed or screened by electromagnetic interference. The INS can provide comprehensive navigation information, but its output error accumulates with navigation time. GPS is capable of high precision absolute positioning, and its error does not accumulate with time; however, satellite navigation can easily be screened and become invalid in extreme environments. Vision sensors feature small size, light weight, and high measurement accuracy, but their real-time performance is constrained because image processing requires a large amount of computation [1]. Doppler Lidar is a novel sensor that measures velocity autonomously based on the Doppler effect. Lidar has many advantages, such as high precision, a wide speed range, and good dynamic performance, and is widely used in many fields [2–5].
An integration method of INS and a three-beam Lidar is proposed in this paper, considering the respective characteristics of INS and Lidar. The velocity information provided by the Lidar is integrated with the INS to obtain higher precision navigation parameters. Exploiting the multiple measurements provided by the three beams, a fault-detection and reconstruction module can be established to ensure high system reliability.
2. Integration of INS and the Three-Beam Lidar
2.1. Integration Method
In the integration of INS and the three-beam Lidar, the output of the three beams can be used in several ways. If the outputs of the three beams are fused before filtering, the fault source of the Lidar cannot be located exactly when a fault occurs. If the velocity output of a single beam is used with the other two treated as redundancy, the switch between sensors when that beam fails will make the filtering discontinuous and affect the subsequent navigation.
Therefore, this paper proposes to regard the three beams of the Lidar as three independent sensors and to integrate each with the INS according to the multisensor fusion concept. Faults can then be monitored and located exactly, and the redundancy of the multiple measurements keeps the system working, so the stability and continuity of the output are guaranteed even when certain beams fail.
A federated filter algorithm is adopted to accomplish the data fusion of the INS and the three-beam Lidar. The INS serves as the common reference system, and the velocity difference between the INS and each Lidar beam is used as the observation. The state vector estimates of the local filters are fused in the global filter, and the local filters are not reset by the global filter. The architecture of the designed federated filter is illustrated in Figure 1.
In Figure 1, x̂_i and P_i (i = 1, 2, 3) are, respectively, the state vector estimate and the error covariance matrix of the ith local filter, and x̂_g and P_g are the state vector estimate and the error covariance matrix of the global filter.
The integration process can be described as follows. First, the system dynamic model is established based on the INS dynamic model and the Lidar error model. Then the velocity outputs of the INS and the Lidar are compared to establish the measurement model. Finally, the federated filter algorithm is applied to the integration of the INS and the three-beam Lidar, and the state estimate of the global filter is used to compensate the INS and obtain the navigation parameters. In the following sections, the dynamic model and the measurement model are elaborated. The principle of the federated filter algorithm can be found in [6]. The fault-detection and reconstruction method is formulated in Section 3.
2.2. System Dynamic Model
2.2.1. INS Error Model
The coordinate systems involved in the INS are illustrated in Figure 2.
In the figure, the e frame is the earth coordinate system and the n frame (east-north-up) is the navigation coordinate system. The earth frame rotates at the constant angular rate ω_ie. λ represents longitude and L represents latitude.
The dynamic INS error model has long been established and researched in the literature, such as in [7, 8], and can be written as follows:

φ̇ = φ × ω_in^n + δω_in^n − C_b^n ε^b

δv̇^n = f^n × φ − (2ω_ie^n + ω_en^n) × δv^n + v^n × (2δω_ie^n + δω_en^n) + C_b^n ∇^b

δL̇ = δv_N/(R_M + h)

δλ̇ = δv_E sec L/(R_N + h) + v_E sec L tan L δL/(R_N + h)

δḣ = δv_U   (1)

where φ is the attitude error vector, δv^n is the velocity error vector in the navigation frame, ω_ie^n is the earth rotation rate in the navigation frame, ω_en^n is the rotation rate of the navigation frame with respect to the earth frame (with ω_in^n = ω_ie^n + ω_en^n), ∇^b is the accelerometer error, C_b^n is the attitude matrix from the body frame to the navigation frame, ε^b is the gyro measurement error in the body frame, δL is the latitude position error, δλ is the longitude position error, δh is the height error, and R_M and R_N represent the radii of curvature along lines of constant longitude and latitude, respectively.
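As a numerical illustration of the position-error channels of the INS error model, the following Python sketch propagates δL, δλ, and δh through one Euler step. The radii, height, latitude, and step size are illustrative assumptions, and the small coupling term of the longitude channel is omitted:

```python
import math

# Illustrative values (assumptions, not parameters from the paper)
R_M = R_N = 6371000.0             # mean earth radii of curvature, m
h, L = 100.0, math.radians(40.0)  # height (m) and latitude (rad)
dv_E, dv_N, dv_U = 0.1, 0.1, 0.1  # velocity errors, m/s
dL = dlam = dh = 0.0              # initial position errors
dt = 1.0                          # integration step, s

# One Euler step of dL' = dv_N/(R_M+h), dlam' = dv_E*sec(L)/(R_N+h), dh' = dv_U
dL += dv_N / (R_M + h) * dt
dlam += dv_E / (math.cos(L) * (R_N + h)) * dt
dh += dv_U * dt
```

The height error grows directly with the vertical velocity error, while the latitude and longitude errors grow by roughly the velocity error divided by the earth radius.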
2.2.2. Lidar Error Model
Lidar based on the laser Doppler effect obtains velocity information by extracting the Doppler shift of the light scattered from the moving object. Each beam of the Lidar provides the velocity relative to the ground along the direction of that laser beam, which is denoted as r. According to the Lidar error model in [9], the output velocity of the Lidar is assumed to contain bias, scale factor, and noise errors. The error model of the Lidar can be expressed as

ṽ_r = (1 + δK) v_r + δb + w   (2)

where ṽ_r is the real output of the Lidar, v_r is the ideal output velocity, and δK, δb, and w stand for the scale factor, bias, and Gaussian white noise, respectively.
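A minimal sketch of this Lidar error model follows; the default scale factor, bias, and noise values are taken from the simulation parameters in Section 4, and the function name is ours:

```python
import random

def lidar_output(v_true, dK=0.003, db=0.02, sigma=0.008):
    """Simulated beam velocity per the error model: (1 + dK)*v + db + white noise."""
    return (1.0 + dK) * v_true + db + random.gauss(0.0, sigma)

# With the noise switched off the model is deterministic:
# a 10 m/s true beam velocity reads 10 * 1.003 + 0.02 = 10.05 m/s.
```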
2.3. Measurement Model
The vehicle velocity provided by the Lidar is relative to the ground along the direction of the laser beam, whereas the output of the INS is the three-axis vehicle velocity defined in the navigation frame. According to the method proposed in [9], the velocity output of the INS is first transformed from the navigation frame to the body frame. The integrated navigation measurement model can then be established along the direction defined by the Lidar beam.
The INS output velocity defined in the body frame can be calculated by

v^b = C_n^b v^n   (3)

where C_n^b is the navigation-to-body frame transformation matrix. Thus, the velocity error defined in the body frame can be expressed as follows:

δv^b = ṽ^b − v^b = C̃_n^b ṽ^n − C_n^b v^n   (4)

The calculated navigation-to-body frame transformation matrix and the identity matrix are denoted as C̃_n^b and I, respectively. One can obtain

C̃_n^b = C_n^b (I + [φ×])   (5)

The error of C_n^b can be written as

δC_n^b = C̃_n^b − C_n^b = C_n^b [φ×]   (6)

where [φ×] is the skew-symmetric form of the attitude error vector given by

[φ×] = [ 0, −φ_U, φ_N; φ_U, 0, −φ_E; −φ_N, φ_E, 0 ]   (7)

and therefore

C̃_n^b = C_n^b + C_n^b [φ×]   (8)

With the substitution of (6) and (8) into (4), the error of the velocity in the body frame can be written as

δv^b = C_n^b δv^n + C_n^b [φ×] v^n + C_n^b [φ×] δv^n   (9)

By neglecting the second-order term and doing some matrix manipulations on (9), we have

δv^b = C_n^b δv^n − C_n^b [v^n×] φ   (10)

From (10) we can see that the velocity error defined in the body frame is determined by the velocity and attitude errors defined in the navigation frame, which are states of the SINS error model.
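The linearization leading to (10) can be checked numerically. The sketch below (all values illustrative) perturbs a body-frame velocity with a small attitude error and an INS velocity error, then compares the exact error C̃_n^b ṽ^n − C_n^b v^n against the linear model C_n^b δv^n − C_n^b [v^n×] φ:

```python
import math

def skew(a):
    """Skew-symmetric matrix [a x] of a 3-vector."""
    x, y, z = a
    return [[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

# True navigation-to-body matrix: a yaw rotation of 30 degrees (illustrative)
c, s = math.cos(math.radians(30)), math.sin(math.radians(30))
Cnb = [[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]

v_n = [30.0, 5.0, -2.0]     # true velocity in the navigation frame, m/s
dv_n = [0.05, -0.03, 0.02]  # INS velocity error, m/s
phi = [1e-4, -2e-4, 3e-4]   # small attitude error, rad

# Computed quantities: C~_n^b = C_n^b (I + [phi x]),  v~^n = v^n + dv^n
Phi = skew(phi)
I3 = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
Cnb_tilde = matmul(Cnb, [[I3[i][j] + Phi[i][j] for j in range(3)] for i in range(3)])
v_tilde = [v_n[i] + dv_n[i] for i in range(3)]

# Exact body-frame velocity error vs the linearized model
exact = [matvec(Cnb_tilde, v_tilde)[i] - matvec(Cnb, v_n)[i] for i in range(3)]
linear = [matvec(Cnb, dv_n)[i] - matvec(Cnb, matvec(skew(v_n), phi))[i] for i in range(3)]
```

The residual between the exact and linearized errors is the neglected second-order term, of the order of the attitude error times the velocity error.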
The angles between the laser beam and the x-axis, y-axis, and z-axis of the body frame are denoted as α, β, and γ. Then the velocity along the beam can be expressed as

v_r = v_x^b cos α + v_y^b cos β + v_z^b cos γ   (11)

which is the INS output velocity along the beam direction. Considering the errors of the six parameters in (11), namely, the errors δv_x^b, δv_y^b, and δv_z^b of v_x^b, v_y^b, and v_z^b and the misalignments δα, δβ, and δγ of α, β, and γ, the velocity along the direction r calculated by the INS can be obtained:

ṽ_r^INS = (v_x^b + δv_x^b) cos(α + δα) + (v_y^b + δv_y^b) cos(β + δβ) + (v_z^b + δv_z^b) cos(γ + δγ)   (12)

Further, by expanding (12) and neglecting the small second-order terms, one obtains

ṽ_r^INS = v_r + δv_x^b cos α + δv_y^b cos β + δv_z^b cos γ − v_x^b sin α δα − v_y^b sin β δβ − v_z^b sin γ δγ   (13)

Combining the Lidar error model and substituting (11) into (2), the real output velocity of the Lidar can be expressed as

ṽ_r = (1 + δK)(v_x^b cos α + v_y^b cos β + v_z^b cos γ) + δb + w   (14)
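The beam projection of (11) is a simple dot product with the direction cosines; a sketch (function name ours):

```python
import math

def beam_velocity(v_b, angles_deg):
    """Project a body-frame velocity onto a beam given its direction-cosine
    angles (alpha, beta, gamma) to the body x-, y-, and z-axes, in degrees."""
    return sum(v * math.cos(math.radians(a)) for v, a in zip(v_b, angles_deg))
```

For example, a beam aligned with the body x-axis (angles 0°, 90°, 90°) simply reads the x component of the body velocity.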
In order to realize the INS/Lidar integrated navigation, the output velocity of the Lidar is compared to that of the INS along the direction r, and the observation of the integrated navigation can be given by

z = ṽ_r^INS − ṽ_r   (15)

Substituting (13) and (14) into (15), we obtain

z = δv_x^b cos α + δv_y^b cos β + δv_z^b cos γ − v_x^b sin α δα − v_y^b sin β δβ − v_z^b sin γ δγ − δK v_r − δb − w   (16)

where δK, δb, and w are the scale factor, bias, and noise of the Lidar. Denoting c = [cos α, cos β, cos γ]^T, equation (10) can be expanded as

δv^b = [δv_x^b, δv_y^b, δv_z^b]^T = C_n^b δv^n − C_n^b [v^n×] φ   (17)

The substitution of (17) into (16) yields

z = c^T C_n^b δv^n − c^T C_n^b [v^n×] φ − v_x^b sin α δα − v_y^b sin β δβ − v_z^b sin γ δγ − δK v_r − δb − w   (18)
It can be seen that when the Lidar output velocity is compared with the INS output velocity along the direction r, the resulting velocity observation error is determined by the states of the INS and Lidar error models. The measurement model of the INS/Lidar integrated navigation is thereby established.
2.4. Integrated Navigation System Equation
In this paper, both the accelerometer and gyroscope errors are considered to be composed of a bias (random constant) error and white noise. The state-space system model can be derived from the differential equations of the navigation errors. The misalignments are assumed to be constant after the installation is accomplished, and the bias and scale factor of the Lidar are also assumed to be constant while powered on, which is described as follows:

∇̇^b = 0,  ε̇^b = 0,  δα̇ = δβ̇ = δγ̇ = 0,  δK̇ = 0,  δḃ = 0   (19)
Combining the INS and Lidar error models, the linear dynamic model of the integrated system can be given by

Ẋ(t) = F(t) X(t) + G(t) W(t)   (20)

The error state vector is composed of the navigation errors, the inertial sensor errors, and the Lidar errors, which forms a 20-dimensional error state vector established as follows:

X = [φ_E, φ_N, φ_U, δv_E, δv_N, δv_U, δL, δλ, δh, ∇_x, ∇_y, ∇_z, ε_x, ε_y, ε_z, δα, δβ, δγ, δK, δb]^T   (21)

In the equation, ∇_x, ∇_y, and ∇_z stand for the bias errors of the accelerometers and ε_x, ε_y, and ε_z stand for the bias errors of the gyroscopes. W, the system process noise, which is composed of the white noises of the inertial sensors, can be written as

W = [w_ax, w_ay, w_az, w_gx, w_gy, w_gz]^T   (22)

where w_ax, w_ay, and w_az are the accelerometer noises and w_gx, w_gy, and w_gz are the gyroscope noises.
The observation is the difference between the INS output velocity and the Lidar output velocity along the beam direction, which can be expressed as

Z = ṽ_r^INS − ṽ_r   (23)

The measurement equation can be given by

Z = H X + V   (24)

The measurement matrix H can be derived from (18):

H = [−c^T C_n^b [v^n×],  c^T C_n^b,  0_{1×9},  −v_x^b sin α,  −v_y^b sin β,  −v_z^b sin γ,  −v_r,  −1]   (25)

where c = [cos α, cos β, cos γ]^T denotes the beam direction cosines and V is the measurement noise.
3. Fault-Detection and Reconstruction
Fault-tolerant technology is an important means to enhance the reliability and stability of the system. The INS does not need any external information, so it is immune to external interference and is rather reliable; we therefore assume that the INS will not suffer failures. However, due to the reflective characteristics of the laser, the signal cannot be effectively extracted from the laser beam in certain environments, so the Lidar may suffer failures such as no output or outliers. The validity of the Lidar measurement should therefore be confirmed.
System-level fault-detection is applied, namely, monitoring the three local filters composed of the INS and the Lidar beams. The residual test method introduced in [6] is adopted. In each local filter, when the Lidar is normal, the time-propagated state estimate and the measurement matrix provide the optimal estimate of the real measurement; the difference between this estimate and the real measurement is the residual. The residual is white noise if the filter is in the normal state. The calculation of the residual is illustrated in Figure 3.
With the residual and its variance matrix, the fault-detection function can be established. The function has a chi-square distribution with m degrees of freedom, where m is the dimension of the measurement vector. The threshold is determined according to the assumed false-alarm rate, and the fault-detection function value is compared to it: if the value is greater than the threshold, a fault is considered detected in the filter; if it is smaller, the filter is considered to be working normally. The fault-detection function is expressed as follows:

λ_k = r_k^T A_k^{−1} r_k   (26)

where r_k is the residual, A_k is the variance matrix of the residual, and λ_k is the fault-detection function.
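For a scalar measurement, as in each local filter here, the test reduces to a squared, variance-normalized residual; the threshold 7.9 used in the simulations corresponds to a 0.005 false-alarm rate for one degree of freedom. A sketch (function name ours):

```python
def fault_detect(residual, residual_var, threshold=7.9):
    """Chi-square residual test for a scalar measurement: lambda = r^2 / A.
    Returns True when the fault-detection function exceeds the threshold."""
    lam = residual * residual / residual_var
    return lam > threshold

# A residual consistent with its predicted variance passes the test;
# a large outlier trips it and the measurement is rejected.
```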
The entire fault-detection process of the system can be described as follows. The residuals of the three local filters are calculated, respectively, and the fault-detection function values are derived and judged. If a fault is detected, the measurement of that filter is rejected; otherwise, its state estimate is fused in the global filter. The process is illustrated in Figure 4.
If a fault is detected, the remaining normal filters are used to calculate the state estimate. The isolated local filter only executes the time-propagation process, and it can be used again once it recovers after the fault disappears.
In this paper, the information distribution coefficient under the fault pattern is taken into consideration, and the method in [10] is adopted. s_i is assumed to be the flag of the ith local filter composed of the INS and a Lidar beam. In the normal state the flag is 1, and if a fault is detected it is changed to 0. The distribution coefficient β_i of the ith local filter is expressed as

β_i = s_i / (s_1 + s_2 + s_3)   (27)

From the formula it can be seen that β_1 + β_2 + β_3 = 1, so the principle of conservation of information in the federated filter is met. Before every filter calculation, the local filter information is reassigned on the basis of the updated distribution coefficients. The fusion algorithm of the federated filter is expressed as

P_g = (Σ_i s_i P_i^{−1})^{−1},  x̂_g = P_g Σ_i s_i P_i^{−1} x̂_i   (28)

From the equation, we can see that the state estimate and covariance matrix of a local filter will not influence the fusion if a fault is detected in it.
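For scalar local states, the flag-weighted information fusion can be sketched as follows (names ours; real filters fuse full state vectors and covariance matrices):

```python
def fuse(estimates, covariances, flags):
    """Information fusion of the local filters; a faulted filter (flag 0) is excluded.
    Scalar-state sketch of P_g = (sum s_i/P_i)^-1, x_g = P_g * sum s_i*x_i/P_i."""
    info = sum(s / P for s, P in zip(flags, covariances))
    P_g = 1.0 / info
    x_g = P_g * sum(s * x / P for s, x, P in zip(flags, estimates, covariances))
    return x_g, P_g
```

With the third filter flagged as faulted, its outlying estimate does not affect the fused result, while the fused covariance still shrinks relative to any single healthy filter.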
4. Numerical Simulation
The numerical simulation is carried out to demonstrate the validity of the proposed method. Uniform motion, deceleration, turning, diving, and flare-out are included in the designed track. The total time is 257 s. In fault-detection, the false-alarm rate is set as 0.005 and the threshold is set as 7.9. The errors of the IMU and Lidar are set as follows:
(i) Gyroscope:
(a) bias: 0.01°/h;
(b) noise: 0.001°/√h.
(ii) Accelerometer:
(a) bias: 50 μg;
(b) noise: 10 μg/√Hz.
(iii) Initial errors:
(a) position error: 10 m, 10 m, 10 m;
(b) velocity error: 0.1 m/s, 0.1 m/s, 0.1 m/s;
(c) attitude error: azimuth 0.1°; level: 0.05°, 0.05°.
(iv) Lidar (the related parameters of the three beams are set the same):
(a) bias: 0.02 m/s;
(b) scale factor: 0.003;
(c) noise: 0.008 m/s/√Hz;
(d) installation: the angles between the beams are equal; the angle between each beam and the z-axis of the body frame is 30°; the angles between the beam projections in the body-frame horizontal plane are all 120°, with the x-axis being the angle bisector of two of the projections and opposite to the third; the misalignments are all 0.05°.
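The installation geometry above can be checked numerically: with each beam tilted 30° from the body z-axis and the beam projections 120° apart in azimuth, the pairwise angles between the beams come out equal. A sketch (the azimuth origin is arbitrary here):

```python
import math

def beam_dir(azimuth_deg, tilt_deg=30.0):
    """Unit direction vector of a beam tilted tilt_deg from the body z-axis
    at the given azimuth in the body-frame horizontal plane."""
    t, p = math.radians(tilt_deg), math.radians(azimuth_deg)
    return (math.sin(t) * math.cos(p), math.sin(t) * math.sin(p), math.cos(t))

beams = [beam_dir(a) for a in (0.0, 120.0, 240.0)]
# Pairwise dot products: sin^2(30)*cos(120) + cos^2(30) = 0.625 for every pair,
# i.e., the three inter-beam angles are identical (about 51.3 degrees).
dots = [sum(u * v for u, v in zip(beams[i], beams[j]))
        for i, j in ((0, 1), (1, 2), (0, 2))]
```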
The simulation results of the integration of the INS and Lidar are illustrated in Figures 5–10. Figures 5–7 compare the velocity, attitude, and position errors, respectively, of the INS and the integrated navigation, while Figures 8–10 show the velocity, attitude, and position errors of the integrated navigation alone.
From the simulation results, we can find that the integrated navigation improves the accuracy effectively compared with the INS. The integration of INS and Lidar can effectively restrain the divergence of the velocity and attitude errors in the landing process and provide highly accurate velocity and attitude information, which demonstrates the validity of the integration method. Since the Lidar measures only velocity, the position error cannot be effectively corrected; other position sensors could be combined to obtain more accurate position information.
To validate the fault-detection and reconstruction method, simulations are carried out in which one or two beams are assumed to malfunction. In accordance with the practical condition, we assume that a malfunctioning beam cannot provide velocity information and its outputs are all zeros.
Beam 1 is assumed to suffer a failure that starts at 100 s and lasts 20 s. The simulation results are illustrated in Figures 11–14: Figure 11 shows the fault-detection value, Figure 12 the velocity error when the first beam malfunctions, Figure 13 the attitude error, and Figure 14 the position error.
Beam 1 and beam 3 are both assumed to suffer failures that start at 100 s and last 20 s. The simulation results are illustrated in Figures 15–18: Figure 15 shows the fault-detection values of the three local filters, Figure 16 the velocity error when two beams malfunction, Figure 17 the attitude error, and Figure 18 the position error.
From Figures 11 and 15, we can see that the system can detect the faults effectively when one or two beams fail. From Figures 12–14 and 16–18, we can see that the system accomplishes the reconstruction when one or two beams fail and continues to provide high precision navigation information.
5. Vehicle Test
A vehicle test was conducted to demonstrate the validity of the method. A fiber optic gyro (FOG) SINS, a three-beam Lidar, GPS, and an uninterruptible power supply (UPS) were used in the vehicle test. The configuration of the integrated system is shown in Figure 19.
The FOG SINS consists of three FOGs with a bias of 0.01 deg/h and three quartz accelerometers with a bias of 50 μg; its update rate is 100 Hz. The Lidar consists of a semiconductor laser with a wavelength of 1.55 μm, optical components, and a signal processing circuit; its bias is 0.01 m/s, its scale factor is 2000 ppm, and its update rate is 10 Hz. GPS is used as the reference system to calculate the positioning error of the INS/Lidar integrated system; its positioning precision is better than 5 m and its update rate is 1 Hz.
The FOG SINS is rigidly installed in the middle of the test vehicle with metal bolts, nearly at the vehicle's center of mass. The optical antenna of the Lidar is horizontally and firmly fixed with several holding devices attached to the vehicle's center, and no mechanical vibration insulator is used. The GPS antenna is fixed on the top of the vehicle with adhesive tape. The width of the test vehicle is about 1.7 m. The actual installation is shown in Figures 20 and 21. All the following tests were conducted at temperatures of 18°C–25°C and a humidity of 30%–50%.
During the vehicle maneuvering, the total time is about 6000 s, the maximum speed is about 100 km/h, and the total mileage is about 105 km. The trajectory of the vehicle is shown in Figure 22. The integration of the INS and Lidar can provide long-duration, high precision navigation. Here, to verify the fault-detection and reconstruction module, the integrated navigation output is displayed mainly for the situations in which one or two beams malfunction during the vehicle maneuvering. The related results of the test are illustrated in Figures 23–26.
Figure 23 shows the fault-detection value and Figure 24 the corresponding position error curve when one beam malfunctions from 1105 s to 1120 s. Figure 25 shows the fault-detection value and Figure 26 the corresponding position error curve when two beams malfunction from 1630 s to 1655 s. From Figures 23 and 25, we can see that the system detects the faults effectively when one or two beams of the Lidar fail, and from Figures 24 and 26 that the system still provides high precision navigation information. The validity and usability of the method are thus demonstrated.
6. Conclusions
In this paper, an integration method of INS and a three-beam Lidar was proposed for the landing process. The three beams of the Lidar were regarded as three independent sensors integrated with the INS under the multisensor fusion concept, with a federated filter used to fuse the data. The SINS/Lidar measurement model was established, and the fault-detection and reconstruction module was designed. Numerical simulations and a vehicle test were conducted, and the results showed that the proposed method can realize high precision navigation and that the system can keep working normally with high reliability when one or two beams fail.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
[1] M. Derome, A. Plyer, M. Sanfourche et al., “Real-time mobile object detection using stereo,” in Proceedings of the International Conference on Control, Automation, Robotics & Vision, Singapore, December 2014.
[2] D. F. Pierrottet, F. Amzajerdian, L. Petway, G. D. Hines, and B. Barnes, “Field demonstration of a precision navigation lidar system for space vehicles,” in Proceedings of the AIAA Guidance, Navigation, and Control (GNC) Conference, Boston, Mass, USA, August 2013.
[3] F. Amzajerdian, L. Petway, G. Hines, B. Barnes, D. Pierrottet, and G. Lockard, “Doppler lidar sensor for precision landing on the Moon and Mars,” in Proceedings of the IEEE Aerospace Conference, pp. 1–7, IEEE, Big Sky, Mont, USA, March 2012.
[4] X.-L. Sui, S.-H. Zhou, N.-J. Chen et al., “All fiber coherent Doppler lidar for spacecraft safe landing,” in Proceedings of the International Conference on Optoelectronics and Microelectronics (ICOM '12), pp. 198–202, Changchun, China, August 2012.
[5] V. Roback, A. Bulyshev, F. Amzajerdian et al., “Helicopter flight test of 3D imaging flash LIDAR technology for safe, autonomous, and precise planetary landing,” in Laser Radar Technology and Applications XVIII, 87310H, vol. 8731 of Proceedings of SPIE, International Society for Optics and Photonics, June 2013.
[6] Y. Y. Qin, H. Y. Zhang, and S. H. Wang, Principles of Kalman Filter and Integrated Navigation, Northwestern Polytechnical University, Xi'an, China, 2012 (in Chinese).
[7] R. M. Rogers, Applied Mathematics in Integrated Navigation Systems, American Institute of Aeronautics and Astronautics, Reston, Va, USA, 2nd edition, 2003.
[8] R. M. Rogers, “Velocity error representations in inertial navigation system error models,” in Proceedings of the Guidance, Navigation, and Control Conference, AIAA-95-3193-CP, Houston, Tex, USA, June 1995.
[9] X. Zhang, Z. Lin, and C. Zhang, “Autonomous integrated navigation method based on the strapdown inertial navigation system and Lidar,” Optical Engineering, vol. 53, no. 7, Article ID 074112, 2014.
[10] F. Yang, C. Cheng, G.-Y. Zhang, and Y.-M. Cheng, “Simulation research of sub-orbital vehicle re-entry flight segment navigation technology,” Journal of Astronautics, vol. 31, no. 3, pp. 729–733, 2010 (in Chinese).
Copyright © 2016 Xiaoyue Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.