Research Article  Open Access
Autonomous Landing of a Micro Aerial Vehicle on a Moving Platform Using a Composite Landmark
Abstract
In the existing vision-based autonomous landing systems for micro aerial vehicles (MAVs) on moving platforms, the limited range of landmark localization, the unknown measurement bias of the moving platform (such as wheel-slip or inaccurate calibration of encoders), and landing trajectory knotting seriously affect system performance. To overcome these shortcomings, an autonomous landing system using a composite landmark is proposed in this paper. In the proposed system, a notched ring landmark and a two-dimensional landmark are combined into an R2D landmark to provide visual localization over a wide range. In addition, wheel-slip and imprecise calibration of the encoders are modeled as an unknown measurement bias of the encoders and estimated online via an extended Kalman filter. The landing trajectory is formulated as a convex quadratic programming problem and solved in each control cycle. Meanwhile, an iterative algorithm for adding equality constraints is proposed and used to verify whether the planned trajectory is feasible. The simulation and actual landing experiment results verify the following: the visual localization with the R2D landmark has the advantages of a wide localization range and high localization accuracy; the pose estimation of the moving platform with unknown encoder measurement bias is continuous and accurate; and the proposed landing trajectory planning algorithm provides a continuous trajectory for reliable landing.
1. Introduction
Micro aerial vehicles (MAVs) are highly agile and versatile flying robots. Recent work has demonstrated their capabilities in many different applications, including but not limited to surveillance, object transportation, agriculture, and aerial photography. This work is focused on the specific task of fully autonomous landing on a moving platform. Some work has focused on autonomous landing on stationary platforms (SPL systems), such as reference [1], which presents a Global Positioning System (GPS) based landing system that uses the GPS position of the landing zone to guide a MAV to land. Several similar systems are presented in references [2–4]. Vision-based landing for MAVs has been an actively studied field in recent years [5]. Examples include the work presented in [6], where a visual system is used to estimate the vehicle position relative to a ring landmark, and reference [7], which presents a method for MAV autonomous takeoff, tracking, and landing on a 2D landmark array. Compared with SPL systems, moving platform landing systems (MPL systems) are worth studying because they are more broadly applicable and can use vehicles or ships as landing targets. Generally, vision-based MPL systems involve the following three components. (1) 2D landmark localization: to detect the landing zone, most state-of-the-art works exploit computer vision from onboard cameras. In those studies, 2D landmark localization is the most common approach for providing the camera position relative to the landing zone. Generally, the 2D landmark is arranged on top of the moving platform, so its size is often strictly limited. To provide a large range of localization data under this limitation, researchers have adopted landmarks such as 2D codes, ring landmarks, or character landmarks [8–10].
Although interesting results have been achieved, they are not necessarily applicable to dynamically moving targets in an open outdoor environment. (2) Pose estimation of the moving platform: it is difficult to guarantee that the moving platform will be visible throughout the entire duration of the landing. To address missing visual information, state estimation and multi-sensor fusion methods have been introduced to predict the motion of the moving platform [11, 12]. The most commonly used scheme is fusion of the encoder data and 2D landmark data [13]. Alternative solutions rely on additional sensors attached to the moving target, including inertial measurement units [14], GPS receivers [15], or infrared markers [16]. For reliable landing on a moving platform, it is necessary to consider wheel-slip or obstruction of the moving platform during its movement. This is by no means standard in the literature, since all the MPL systems mentioned above directly estimate the state of the moving platform without considering the measurement bias of the sensors. (3) Online landing trajectory planning: for the MAV to be truly autonomous, landing trajectory planning must be performed on the onboard processor in real time [17]. In fact, the desired landing trajectory must not only pass through all the waypoints accurately and continuously but also reduce energy consumption and satisfy the dynamic constraints of the MAV. Therefore, optimal control theory is introduced to solve the quadratic programming (QP) problem of landing trajectory planning. Examples include minimum snap [18], minimum time [19], and shortest path under uncertain conditions [20]. A common property of these methods is that the planned trajectory may be nonconvex or even knotted when the number of waypoint constraints is too small. Furthermore, although algorithms with inequality constraints can obtain better trajectories, their computational complexity increases significantly [21]. Therefore, those algorithms often rely on external computation for trajectory planning.
In this paper, a MAV system capable of autonomously landing on a moving target using mainly onboard sensing and computing is presented. The only prior knowledge about the moving landing target is the real-time measurements of its encoders. The proposed R2D-MPL system uses a composite landmark comprising a notched ring (NR) landmark and a 2D landmark to provide accurate visual localization data on the moving platform over a wide spatial range. To deal with temporarily missing visual information, the unknown measurement bias of the encoders caused by wheel-slip and imprecise calibration is taken into consideration in the target's dynamical model, and the state of the moving platform is estimated online based on an extended Kalman filter (EKF). Meanwhile, the R2D-MPL system computes trajectories based on the energy necessary to execute an efficient landing, which takes into account the dynamic constraints of the MAV and optimizes the equality constraints of waypoints through an iterative algorithm. The proposed system is validated by simulation as well as in real-world experiments using low-cost, lightweight consumer hardware. The experimental results show that the approach can realize autonomous and reliable landing of the MAV when the encoders have measurement bias.
The rest of this paper is organized as follows. Section 2 presents an overview of the proposed R2D-MPL system. Section 3 introduces the monocular localization algorithm based on the proposed R2D landmark. Section 4 introduces the pose estimation algorithm based on the extended Kalman filter. Section 5 introduces the landing trajectory planning algorithm based on the minimum jerk rule. Section 6 describes the hardware design and experimental results: the experimental results of the proposed visual localization algorithm are given in Section 6.1, the simulation landing experiment in Section 6.2, and the actual landing experiment in Section 6.3. Section 7 presents the conclusions of this paper.
2. Overview of the R2D-MPL System
Landing on a moving platform is more complex than landing on a stationary platform. Recognition errors for a 2D landmark, inaccurate motion models, and maneuvering by the moving platform will seriously affect landing results. To overcome these problems, a general and hierarchical R2D-MPL system is proposed in this paper, as shown in Figure 1.
The R2D-MPL system shown in Figure 1 contains the following four layers. The sensor layer consists of a variety of heterogeneous sensors, including the R2D landmark, encoders, inertial measurement unit (IMU), and GPS; it provides the measurements of the moving platform and the MAV. The fusion layer uses the measurement data from the sensor layer to obtain real-time pose estimates of the MAV and the moving platform. In addition, the proposed system can further incorporate other sensors, such as a laser range finder, UWB, or visual odometry, to improve the estimation performance. The decision layer generates the desired waypoint and trajectory online based on the real-time pose estimates from the fusion layer. Finally, the control layer realizes the visual tracking of the moving platform and the pose control of the MAV.
3. Visual Localization Method Based on a Composite R2D Landmark
Ring landmarks and 2D landmarks are the most commonly used landmarks in visual localization systems. A ring landmark has the advantage of providing good estimation results from larger distances, since it uses the projection information of the whole contour. However, it suffers from insufficient degrees of freedom and singular solutions. A 2D landmark allows estimation of the camera's 6-DoF pose, but its localization accuracy is affected by the vertex detection result; therefore, it is only suitable for close-range localization. To accurately estimate the MAV's pose, a composite R2D landmark is designed and placed on top of the moving platform. The R2D landmark comprises a notched ring (NR) landmark and a 2D landmark; thus, the R2D landmark has the potential to realize precise and continuous localization over a wide range. As shown in Figure 2, the NR landmark has a notch, with given outer and inner ring diameters; the 2D landmark is located at the center of the NR landmark with a given size.
For R2D landmark recognition, the original image is processed with a 6×6 template Gaussian filter and an adaptive binarization algorithm. The serial number and the corners' 2D-3D matching relationship are verified based on the coding information in the contours. Then, the ring contour is segmented, sampled, and fitted to find the inner and outer ring projections. Finally, the notch is identified in the binary image established by combining the ring contour with the ellipse projection. After recognizing the R2D landmark, the 5-DoF pose of the camera can be calculated from the outer ring projection, and the complete 6-DoF pose can be calculated by additionally considering the pixel position of the notch. First, the outer ring ellipse projection is transformed to the camera coordinate system, and the ellipse parameter equation can be given as follows.
The elliptical oblique cone [22] projection matrix can be calculated as follows, where is the camera focal length; , , and are the eigenvalues of ; and and are the feature vectors of and , respectively. Assuming , the position and attitude of the ellipse plane in the canonical camera coordinate system can be calculated as follows.
In equations (3) and (4), the vector represents the pitch, roll, and yaw angles of the ellipse plane, and represents the location of the ellipse center in the canonical camera coordinate system. Because the coefficients , , and are unknown, only a 5-DoF singular pose (including the 2-DoF rotation matrix and the 3-DoF translation vector ) of the camera can be obtained.
To obtain nonsingular solutions, the following two kinds of constraints are added. First, two simple constraints ( and ) can be established based on the fact that the NR landmark is in front of the camera and its axis faces the camera. Second, assuming that the slope of the NR landmark can be ignored, the pitch and roll angles calculated from the rotation matrix should be close to the results provided by the IMU or the 2D landmark. Therefore, define as the regular 5-DoF pose solution provided by the NR landmark, which can be calculated based on equations (5) and (6) and the above constraints. The last DoF is approximately calculated based on the positions of the NR landmark center pixel and the notch center pixel. Thus, the 6-DoF pose provided by the NR landmark is given as follows, where is the NR landmark center pixel and is the notch center pixel. To simplify, is converted to the 3D position and Euler angles in the Euler coordinate system . Define as the 6-DoF camera pose in ; it can be calculated as follows.
Ideally, the 2D landmark and the NR landmark can simultaneously provide the 6-DoF camera pose. The localization of the NR landmark is obtained based on equations (1)–(9). The localization of the 2D landmark is obtained with the commonly used EPnP algorithm [23], which is fast and simple. However, the EPnP algorithm is seriously affected by the accuracy of the corners' 2D-3D matching relationship; thus, the 2D landmark is only suitable for close-range localization. To synthesize the localization results of the two landmarks, a weighted fusion algorithm based on the corners' projection error of the 2D landmark is proposed in this paper, and the 6-DoF camera pose provided by the R2D landmark in is given as follows, where is the 6-DoF camera pose provided by the 2D landmark, is the average projection error of the 2D landmark's corners, and is the maximum projection error.
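The paper specifies only the structure of this weighted fusion (a weight driven by the average corner reprojection error of the 2D landmark). The Python sketch below illustrates one plausible linear form; the function name, the linear weighting law, and the 6-DoF pose layout are assumptions rather than the authors' exact formula, and linearly blending Euler angles is only valid away from angle wrap-around.

```python
import numpy as np

def fuse_r2d_pose(pose_nr, pose_2d, err_proj, err_max):
    """Weighted fusion of the NR-landmark pose and the 2D-landmark pose.

    pose_nr, pose_2d : 6-DoF poses as (x, y, z, roll, pitch, yaw) arrays,
                       or None when the corresponding landmark is not detected
    err_proj         : average reprojection error of the 2D landmark corners (px)
    err_max          : reprojection error at which the 2D result is fully distrusted
    """
    if pose_2d is None:          # 2D landmark not detected: rely on NR alone
        return np.asarray(pose_nr)
    if pose_nr is None:          # NR landmark not detected: rely on 2D alone
        return np.asarray(pose_2d)
    # Weight the 2D result by how small its corner reprojection error is
    # (assumed linear law; the paper gives only the weighted structure).
    w_2d = max(0.0, 1.0 - err_proj / err_max)
    return w_2d * np.asarray(pose_2d) + (1.0 - w_2d) * np.asarray(pose_nr)
```

With a small reprojection error the fused pose follows the 2D landmark; as the error approaches the maximum, the result falls back to the NR landmark, matching the intended close-range/far-range division of labor.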
4. Pose Estimation of the Moving Platform
Pose estimation of the moving platform can improve the robustness of the MPL system, since the visual localization results are not always available. To obtain continuous and accurate pose estimates of the moving platform, a precise motion model and an appropriate multi-sensor fusion algorithm are necessary. In this section, a general motion model for the MPL system is proposed that accounts for the unknown encoder measurement bias. Meanwhile, the precise and continuous position, velocity, and orientation of the moving platform are estimated online based on the extended Kalman filter (EKF). Here, a three-wheeled omnidirectional car is used as the moving platform, and its coordinate system is shown in Figure 3.
As seen in Figure 3, the body coordinate system of the moving platform is defined as ; its origin is located at the center of the moving platform, and its axis points toward the head. The remaining axes follow the right-hand rule. The inverse kinematics model [11] of the three-wheeled omnidirectional car is given as follows, where is the radius of the moving platform, is its height, is the linear velocity of each wheel, and are the linear velocities in , is the angular velocity in , and is the kinematics transformation matrix. The global coordinate system is defined as , the 2D position of the moving platform in is defined as , and its heading is represented as . Thus, the 2D motion model of the moving platform can be described as follows.
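The inverse kinematics matrix of a three-wheeled omnidirectional platform has a standard form. The sketch below assumes wheels mounted at 120° spacing starting from the body x-axis (the exact mounting angles of the platform in the paper are not stated) and uses the 42 cm platform radius reported in Section 6.

```python
import numpy as np

def omni_wheel_speeds(vx, vy, omega, R=0.42,
                      wheel_angles=(0.0, 2 * np.pi / 3, 4 * np.pi / 3)):
    """Inverse kinematics v = K [vx, vy, omega]^T for a three-wheeled
    omnidirectional platform.

    vx, vy : body-frame linear velocity (m/s)
    omega  : body-frame angular velocity (rad/s)
    R      : platform radius (m); wheel_angles are the assumed mounting angles
    Returns the linear speed of each wheel (m/s).
    """
    # Each row projects the body twist onto one wheel's rolling direction.
    K = np.array([[-np.sin(a), np.cos(a), R] for a in wheel_angles])
    return K @ np.array([vx, vy, omega])
```

For a pure rotation every wheel turns at speed R·omega, and for a pure translation the three wheel speeds sum to zero, which is a quick sanity check on the 120° geometry.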
Based on equations (12), (13), and (14), the pose estimation of the moving platform can be achieved via extended Kalman filter (EKF) methods. However, the cumulative error is significant due to imperfect calibration of the encoders. Moreover, the moving platform commonly drives on rough ground, so the system faces the problem of wheel-slip. For example, when the moving platform passes over a road with a gap, or when it is blocked by obstacles, the encoder measurements inevitably contain bias. Uncalibrated encoder measurements will cause incorrect pose estimation. To solve this problem, the unknown measurement bias of the encoders is incorporated into the motion model. The state vector of the moving platform is defined as including the position and orientation in , the linear velocity and angular velocity in , and the measurement bias of each encoder. In addition, including the body linear velocity in the state vector has the following advantages: (1) it makes the proposed R2D-MPL system more amenable to introducing other sensors that measure these quantities directly; for example, a gyroscope can be added to measure the angular velocity, and an optical flow sensor can be added to measure the linear velocity of the moving platform; (2) it makes the proposed R2D-MPL system more general in handling the transmission delay of encoder data with ring queue and timestamp methods [24].
A nonlinear function is used to describe the motion model of the moving platform in this paper, where is the sampling period and is the process noise, which is described as Gaussian noise with covariance matrix . Thus, the predicted value of at the moment is given as follows:
Define as the a priori covariance matrix of the state estimate; it can be calculated as follows, where is the Jacobian matrix of .
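The prediction step above can be sketched in a few lines of Python. This is a minimal illustration assuming the state layout described in the text (world position and heading, body-frame velocities, one bias per encoder); the velocities and biases are propagated as random walks, and the Jacobian is formed by finite differences for brevity rather than analytically.

```python
import numpy as np

def f_motion(x, T):
    """Motion model: x = [px, py, psi, vx, vy, omega, b1, b2, b3], i.e.
    world position/heading, body-frame velocities, per-encoder bias."""
    px, py, psi, vx, vy, w = x[:6]
    c, s = np.cos(psi), np.sin(psi)
    xn = x.copy()
    xn[0] = px + T * (c * vx - s * vy)  # body velocity rotated into the world frame
    xn[1] = py + T * (s * vx + c * vy)
    xn[2] = psi + T * w
    return xn                           # velocities and biases propagate unchanged

def ekf_predict(x, P, Q, T, eps=1e-6):
    """EKF time update; the Jacobian F of f_motion is built numerically."""
    n = x.size
    fx = f_motion(x, T)
    F = np.zeros((n, n))
    for i in range(n):                  # finite-difference Jacobian, column by column
        dx = np.zeros(n)
        dx[i] = eps
        F[:, i] = (f_motion(x + dx, T) - fx) / eps
    return fx, F @ P @ F.T + Q          # predicted state and a priori covariance
```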
The R2D landmark localization result and the encoder data are utilized to perform the measurement update here.
Define as the camera pose in , and let be the estimated MAV pose in , calculated based on the fusion algorithm proposed in reference [7]; thus, the moving platform pose in can be calculated as follows, where . Define the encoder measurement vector as , which is augmented with to obtain the final measurement vector . The nonlinear function between the measurement vector and is given as follows, where is the Gaussian measurement noise with covariance matrix . The measurement prediction of is given as follows.
The residual vector and its approximate covariance matrix are given as follows, where is the Jacobian matrix of .
The Kalman filter gain can be calculated as follows.
Finally, the state estimate and covariance matrix of the moving platform are given as follows, where is the identity matrix. In summary, real-time pose estimation of the moving platform can be realized based on equations (11)–(18) and (20)–(25), and the result can be used as feedback data in the decision layer and control layer.
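The measurement update in equations (20)–(25) follows the standard EKF form. The sketch below keeps the measurement function generic: in this system, h would stack the R2D-landmark-derived platform pose with the three encoder readings, the encoder part predicted from the state including the per-wheel bias. The numerical Jacobian and the function signature are illustrative assumptions.

```python
import numpy as np

def ekf_update(x, P, z, R_cov, h, eps=1e-6):
    """EKF measurement update with a finite-difference Jacobian of h.

    x, P  : a priori state and covariance
    z     : measurement vector (here: platform pose from the R2D landmark
            stacked with the encoder readings)
    R_cov : measurement noise covariance
    h     : measurement function mapping the state to a predicted measurement
    """
    n = x.size
    hx = h(x)                              # predicted measurement
    H = np.zeros((hx.size, n))
    for i in range(n):                     # numerical Jacobian of h
        dx = np.zeros(n)
        dx[i] = eps
        H[:, i] = (h(x + dx) - hx) / eps
    S = H @ P @ H.T + R_cov                # residual covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ (z - hx)               # corrected state
    P_new = (np.eye(n) - K @ H) @ P        # corrected covariance
    return x_new, P_new
```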
5. Landing Trajectory Planning Algorithm Based on the Minimum Jerk Rule
To achieve a precise, continuous, and minimum-energy landing process, an online two-stage landing trajectory planning algorithm based on the minimum jerk rule is proposed in this section, which includes a visual tracking stage and a planning landing stage. In the visual tracking stage, the MAV approaches a circular boundary based on the pixel position of the moving platform. In the planning landing stage, real-time waypoint planning based on the pose change of the moving platform is performed in each control period. Meanwhile, a continuous and accurate trajectory passing through the waypoints is generated based on the minimum energy criterion. The landing process of the proposed system is shown in Figure 4, in which the black dotted line is the moving platform's trajectory, the black arrowhead line is the MAV's trajectory during the visual tracking stage, the red arrowhead line is the MAV's trajectory during the planning landing stage, and the blue arrows are the moving platform's velocity vectors at each moment.
5.1. Visual Tracking Stage
The visual tracking method is utilized to guide the MAV to approach the boundary (a circle of a given radius centered at the center of the moving platform). To obtain a stabilized image, a two-axis pan-tilt camera mount is installed at the front of the MAV in the proposed system. The horizontal pixel error between the R2D landmark central pixel and the camera center is used to control the heading of the MAV, and the vertical pixel error is used to control the pitch angle of the pan-tilt mount. During this stage, the MAV maintains a fixed altitude and a fixed speed to approach the moving platform until their distance is sufficiently small. Define as the distance between the MAV and the moving platform, which can be calculated as follows, where is the estimated position of the MAV in , is the estimated position of the moving platform in , is the pitch angle of the pan-tilt mount, and is a weighting parameter. The landing process switches to the planning landing stage once this distance falls below the threshold.
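The switching metric combines horizontal separation with the pan-tilt pitch angle (a camera pitched nearly straight down means the target is almost below the MAV). The paper gives only the weighted structure of this distance, so the exact blend below, the function names, and the default threshold and weight values are assumptions for illustration.

```python
import numpy as np

def tracking_distance(p_mav, p_plat, pan_pitch, w=0.6):
    """Blended switching metric for the visual tracking stage.

    p_mav, p_plat : estimated 3D positions of the MAV and platform (world frame)
    pan_pitch     : pitch angle of the pan-tilt mount (rad); pi/2 = straight down
    w             : blend weight (assumed form; the paper gives only the
                    weighted structure of the metric)
    """
    d_xy = np.linalg.norm(np.asarray(p_mav)[:2] - np.asarray(p_plat)[:2])
    return w * d_xy + (1.0 - w) * abs(np.pi / 2 - pan_pitch)

def should_switch(D, d_switch=1.68):
    """Switch from visual tracking to the planning landing stage
    (threshold value assumed)."""
    return D < d_switch
```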
5.2. Planning Landing Stage
As shown in Figure 4, the landing strategy of the proposed algorithm is to chase the moving platform, approach it from behind, and finally land. Therefore, to obtain a dynamic landing trajectory that meets these requirements, waypoints and are replanned in each control cycle. The waypoint is selected as a point on the reverse extension line of the velocity vector , which ensures that the MAV approaches the moving platform from behind. Define the radius of as , with its center located at the center of the moving platform; thus, the parameters of waypoint are given as follows, where is the estimated linear and angular velocity of the moving platform in , is its first-order derivative, is the flight altitude during this stage, and and are parameters ensuring that the MAV can automatically change altitude during turning or maneuvering of the moving platform.
The waypoint is the final landing point. At each control cycle, the planner selects prediction times by uniformly sampling a fixed-duration prediction horizon. For each sampled time, the planner predicts the future state that the moving platform will reach, computed from its dynamical model starting from the latest estimate provided by the estimator designed in Section 4. This predicted state is used as the candidate final waypoint of each candidate trajectory. Among all candidate trajectories, the one requiring the minimum amount of energy [12] to execute is selected; thereby, the waypoint is determined.
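The candidate selection described above can be sketched as follows. The two hooks are stand-ins: predict_state would propagate the platform's motion model from the EKF estimate of Section 4, and energy would evaluate the cost of the minimum-jerk trajectory reaching that point; their names, signatures, and the default horizon are assumptions.

```python
import numpy as np

def select_landing_waypoint(predict_state, energy, horizon=4.0, n_samples=8):
    """Pick the final landing waypoint by uniformly sampling the prediction
    horizon, predicting the platform state at each candidate time, and keeping
    the candidate whose landing trajectory needs the least energy.

    predict_state(t) : platform position predicted t seconds ahead
    energy(p, t)     : execution cost of the trajectory reaching p at time t
    """
    times = np.linspace(horizon / n_samples, horizon, n_samples)
    candidates = [(energy(predict_state(t), t), predict_state(t), t)
                  for t in times]
    # Compare on cost only (key=...), so array-valued positions never get compared.
    cost, p2, t2 = min(candidates, key=lambda c: c[0])
    return p2, t2
```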
In summary, to generate a smooth and minimum-energy landing trajectory, an online trajectory planning algorithm is proposed here. For simplicity, the three dimensions of the trajectory are planned independently. A fifth-order polynomial with parameter vector is used to describe the landing trajectory. The minimum jerk trajectory is planned at each control period, and the cost function is given as follows.
Several equality constraints can be established considering the waypoint constraints and the continuity of position, velocity, and acceleration at the waypoint . Taking one axis as an example, its constraint equations are given as follows, where is the estimated linear velocity of the moving platform in , is the time at which the MAV reaches the waypoint , and is the time at which the MAV reaches the waypoint ; these times are determined by the distance and the maximum velocity constraint of the MAV. As shown in equation (28), the trajectory starts at the MAV's current position , where its velocity and acceleration equal the current velocity and acceleration of the MAV. The middle of the trajectory must pass through the waypoint , where the MAV accelerates so that its velocity approaches . Finally, the trajectory ends at with the same velocity as the moving platform. These equality constraints only ensure that the trajectory passes through the desired waypoints; the planned trajectory may still exceed the feasible corridor or the physical actuation constraints of the MAV. This problem could be solved with inequality constraints, but the large amount of computation required cannot meet the requirements of real-time planning. Therefore, an iterative algorithm for adding equality constraints (IAEC) is proposed and used to verify whether the planned trajectory is feasible.
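For a single axis and a single segment, the equality-constrained minimum-jerk problem over a quintic reduces to a small linear system, the KKT conditions of the QP. The paper solves the full problem with OOQP; the self-contained KKT solve below is an illustrative stand-in with boundary constraints only (with the waypoint and continuity constraints of equation (28), the constraint matrix simply gains rows and the same solve applies).

```python
import numpy as np

def tvec(t, d):
    """Row vector of the d-th derivative of [1, t, ..., t^5] at time t."""
    row = np.zeros(6)
    for i in range(d, 6):
        c = 1.0
        for k in range(d):
            c *= (i - k)          # falling factorial i*(i-1)*...*(i-d+1)
        row[i] = c * t ** (i - d)
    return row

def min_jerk_segment(T, p0, v0, a0, p1, v1, a1):
    """One axis of a quintic minimum-jerk segment with boundary equality
    constraints, solved through the KKT system of the equality-constrained QP."""
    # Jerk-cost Gram matrix Q (analytic integral of squared jerk on [0, T]).
    Q = np.zeros((6, 6))
    Q[3, 3] = 36 * T
    Q[3, 4] = Q[4, 3] = 72 * T**2
    Q[3, 5] = Q[5, 3] = 120 * T**3
    Q[4, 4] = 192 * T**3
    Q[4, 5] = Q[5, 4] = 360 * T**4
    Q[5, 5] = 720 * T**5
    # Equality constraints: position/velocity/acceleration at t = 0 and t = T.
    A = np.vstack([tvec(0, 0), tvec(0, 1), tvec(0, 2),
                   tvec(T, 0), tvec(T, 1), tvec(T, 2)])
    b = np.array([p0, v0, a0, p1, v1, a1])
    # KKT system: [Q A^T; A 0] [coeffs; multipliers] = [0; b]
    kkt = np.block([[Q, A.T], [A, np.zeros((6, 6))]])
    rhs = np.concatenate([np.zeros(6), b])
    return np.linalg.solve(kkt, rhs)[:6]   # polynomial coefficients c0..c5
```

A rest-to-rest unit move over T = 1 recovers the classical minimum-jerk polynomial 10t³ − 15t⁴ + 6t⁵, a handy correctness check.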
The proposed IAEC algorithm first plans the landing trajectory based on equations (28) and (29). It then searches the trajectory for points that exceed the feasible corridor or the physical actuation constraints of the MAV, adds new equality constraints at those points, and performs trajectory planning again. The expected trajectory is obtained by repeating this process until the stopping condition is reached, and the convex quadratic programming (QP) problem in equation (28) is solved with the OOQP package [25]. The proposed IAEC method is given in Algorithm 1.
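The IAEC idea can be sketched as a generic loop: plan with the current equality constraints, sample the trajectory, pin any samples that leave the feasible corridor to the nearest bound as new equality constraints, and replan. All four function hooks below are hypothetical stand-ins, not the paper's Algorithm 1; they would be bound to the QP planner, polynomial evaluation, and corridor/actuation checks of the actual system.

```python
def iaec_plan(plan, sample, violates, clamp, t_grid, max_iter=6):
    """Sketch of the IAEC loop (iterative addition of equality constraints).

    plan(extra)  : solves the equality-constrained QP with the base waypoint
                   constraints plus the extra (t, value) equality constraints
    sample(c, t) : evaluates the planned trajectory c at time t
    violates(v)  : True if the value v leaves the corridor / actuation bounds
    clamp(v)     : projects a violating value back onto the nearest bound
    """
    extra = []
    c = plan(extra)
    for _ in range(max_iter):
        bad = [(t, clamp(sample(c, t)))
               for t in t_grid if violates(sample(c, t))]
        if not bad:
            break                  # trajectory feasible everywhere: done
        extra.extend(bad)          # pin violating samples to the bound
        c = plan(extra)            # replan with the enlarged equality set
    return c
```

Because every iteration only adds equality constraints, each replan stays a cheap equality-constrained QP, which is what lets the method run onboard where an inequality-constrained solve would not.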

6. Experiment and Analysis
To verify the proposed R2D-MPL system, an X450 quadrotor is used in this section. The quadrotor uses 8-inch propellers and four 980 KV brushless motors as the drive system. Its weight is 1.35 kg, and the flight time is 15 minutes. The proposed R2D landmark localization algorithm, pose estimation algorithm, and trajectory planning algorithm are tested on the onboard ODROID-XU4 processor. The height of the three-wheeled omnidirectional car is 15 cm, and its radius is 42 cm. The R2D landmark parameters are 70 cm, 60 cm, and 54 cm (the outer ring diameter, inner ring diameter, and 2D landmark size, respectively). The communication link between the quadrotor and the moving platform is a 2.4 GHz wireless radio.
6.1. R2D Landmark Localization Experiment
The localization performance of the R2D landmark is verified, and the result is shown in Figure 5. The blue dotted line in Figure 5(a) is the localization result for the 2D landmark; it can be seen that the 2D landmark cannot be recognized when the distance along the camera axis exceeds 3 m. The red dotted line is the localization result based on the NR landmark, which can only be recognized in the range of 1.5 m to 5 m. The black line is the localization result based on the R2D landmark, which obtains continuous localization in the range of 0.5 m to 6 m by fusing the results of the 2D landmark and the NR landmark. Figure 5(b) shows the localization error of each landmark; it can be concluded that the proposed algorithm effectively integrates the results of the two sub-landmarks and realizes continuous and complete localization.
(a) Localization result
(b) Localization error
In addition, an experiment comparing the localization performance of the 2D, NR, and R2D landmarks at the same size is conducted, the results of which are given in Table 1. In this experiment, the outermost width of each landmark's contour is taken as its size. It can be seen from Table 1 that the R2D landmark has a larger localization range and higher localization accuracy at the same landmark size.

6.2. Moving Platform Pose Estimation Experiment
To verify the pose estimation performance of the proposed system (method 1), the following simulation experiments are designed. The Gazebo and ROS simulation software are used to build the experimental environment. The quadrotor hovers at the coordinate point (0, 0, 3). Meanwhile, the moving platform runs along a circular trajectory centered at (0, 3, 0) with a radius of 3 m at a speed of 1 m/s. In addition, the pose estimation algorithm (method 2) proposed in reference [12] is adopted for comparison; it also utilizes the EKF method but does not consider the encoder measurement bias. The simulation results are given in Figure 6.
(a) Position estimate result
(b) Velocity estimate result
Figure 6(a) shows the position estimation results for the moving platform. The red line is the ground truth of the moving platform, the green dots are the visual measurement results, the blue dotted line is the estimated result for method 2, and the black dash-dot line is the result for method 1. It can be seen that the result of method 2 has a large deviation since it does not consider the encoder measurement bias. Figure 6(b) shows the velocity estimation results for the moving platform. The red line is the true velocity of the moving platform. The blue dotted line is the estimated result for method 2, which shows an obvious deviation. The black solid line is the estimated result for method 1, which yields a satisfactory estimate.
In addition, the position estimate error is shown in Figure 7. As seen from this figure, the position estimate error of the proposed algorithm converges rapidly, and the position estimate error of method 2 fluctuates greatly. The maximum position estimate error of method 1 is smaller than 0.25 m, which is much better than the result for method 2. Thus, the proposed algorithm can obtain accurate realtime position estimates of the moving platform.
To simulate wheel-slip of the moving platform, random walk noise is added to the encoders' measurements. The estimated encoder measurement bias is given in Figure 8. In this experiment, the moving platform is blocked and stops after 13 s, while the wheels keep spinning, causing wheel-slip. In Figure 8, the blue dotted line represents the true encoder data, the black solid line represents the encoder measurements, and the red dotted line represents the bias estimated by the proposed algorithm. It can be seen that the true encoder data are equal to zero once the moving platform is blocked; however, the encoder measurements are nonzero due to the wheel-slip. The estimated encoder measurement biases gradually approach the encoder measurements after the wheel-slip occurs, which demonstrates that the proposed algorithm can effectively estimate the encoder measurement bias.
6.3. Landing Experiments
First, the following experiment is used to verify the IAEC algorithm. Five desired waypoints, [2, 1, 2], [3, 5, 3], [5, 2, 5], [10, 8, 4], and [12, 2, 5], are given in this experiment. In addition, the velocities and accelerations at the start and end points are zero. The proposed IAEC algorithm is compared with the algorithm using only waypoint constraints (method 1) and the algorithm using inequality constraints (method 2) [26]. The result is shown in Figure 9.
As seen in Figure 9, method 2 gives the optimal trajectory planning result, method 1 has obvious knots at the waypoints, and the proposed IAEC algorithm achieves a compromise between the two. In terms of computation time on the ODROID-XU4, method 1 completed trajectory planning in 10.52 ms, method 2 in 380.3 ms, and the IAEC algorithm in 60.35 ms after 6 iterations. Although the IAEC algorithm reduces the waypoint passing accuracy by relaxing the waypoint constraints, it effectively eliminates trajectory knotting with far less computation. For the MPL system presented in this paper, only the accuracy of the final landing waypoint needs to be guaranteed.
To verify the performance of the proposed R2D-MPL system, several simulation landing experiments are performed and compared with the system proposed in reference [12] (method 3). To our knowledge, the system in method 3 is currently the most reliable MPL system using only onboard sensing and computation. The greatest improvements of the proposed R2D-MPL system are the following: (1) the R2D-MPL system accounts for the measurement bias of the encoders caused by wheel-slip and imprecise calibration in the pose estimation of the moving platform, and (2) the R2D-MPL system also addresses the trajectory knotting problem caused by the dynamic constraints of the MAV. Therefore, the following experiments were carried out with the moving platform under different velocities and motion models; meanwhile, different fixed measurement biases were added artificially to encoder 1. Define as the average time for a successful landing, as the average position error of successful landings, and as the landing success rate. The results of 20 experiments under different conditions are given in Table 2.

As seen in Table 2, both the proposed R2D-MPL system and method 3 have high landing success rates when the moving platform moves slowly and the measurement bias is small. However, the proposed system performs better than method 3, especially when encoder 1 has a large measurement bias. It achieves an acceptable landing success rate and high landing accuracy thanks to the accurate modeling; meanwhile, it consumes less flight time thanks to the efficient and continuous landing trajectory.
Furthermore, actual landing experiments with linear and circular platform motion are conducted; the video of the experiment can be found at https://www.youtube.com/watch?v=ZljQ1NgEIQ. The linear motion velocity of the moving platform is 0.25 m/s. The circular motion velocity is 0.5 m/s, with a radius of 1 m. The parameters of the landing trajectory planning algorithm are given as follows: is 1.8 m, is 1.68 m, is 1.2 m, is 0.7 m, is 0.012, is 0.003, is 0.7 m/s, and is 0.6. The experimental results are shown below.
Figure 10(a) shows the result for the moving platform in linear motion. In Figure 10(a, 1), the quadrotor detects the moving platform for the first time. In Figure 10(a, 2), the quadrotor switches from vision tracking to landing trajectory planning. The landing trajectory planning is performed in Figure 10(a, 3). The quadrotor continues replanning until it successfully lands on the moving platform in Figure 10(a, 4). Figure 10(b) shows the 3D landing trajectory of the linear motion experiment, and Figure 11 shows the landing result for the moving platform under circular motion.
Figure 10: (a) landing experiment; (b) 3D trajectory.
Figure 11: (a) landing experiment; (b) 3D trajectory.
7. Conclusions
In this paper, a MAV composite landmark guidance system capable of autonomously landing on a moving platform using only onboard sensing and computing is presented. The system relies on state-of-the-art computer vision algorithms, detection and motion estimation of the moving platform, and path planning for fully autonomous landing. No external infrastructure, such as a motion-capture system or an ultra-wideband system, is needed. The only prior knowledge about the moving platform is the real-time measurements of its encoders; meanwhile, the unknown measurement biases of the encoders are considered in the dynamical model of the moving platform. The proposed system is validated in simulation as well as in real-world experiments using low-cost, lightweight consumer hardware. In summary, the proposed approach achieves a fully autonomous MAV system capable of landing on a moving target with wheel slip and biased encoder measurements, using only onboard sensing and computing and without relying on any external infrastructure.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work has been supported by the National Natural Science Foundation (NNSF) of China under Grants 61603040 and 61433003, the Yunnan Applied Basic Research Projects of China under Grant 201701CF00037, and the Yunnan Provincial Science and Technology Department Key Research Program (Engineering) under Grant 2018BA070.
References
[1] B. W. Parkinson, M. L. O'Connor, and K. T. Fitzgibbon, "Aircraft automatic approach and landing using GPS," in Global Positioning System: Theory and Applications, Volume II, Progress in Astronautics and Aeronautics, pp. 397–425, American Institute of Aeronautics and Astronautics, 1996.
[2] Y. H. Shin, S. Lee, and J. Seo, "Autonomous safe landing-area determination for rotorcraft UAVs using multiple IR-UWB radars," Aerospace Science and Technology, vol. 69, pp. 617–624, 2017.
[3] H. Duan, S. Shao, B. Su, and L. Zhang, "New development thoughts on the bio-inspired intelligence based control for unmanned combat aerial vehicle," Science China Technological Sciences, vol. 53, no. 8, pp. 2025–2031, 2010.
[4] H. Duan, Y. Zhang, and S. Liu, "Multiple UAVs/UGVs heterogeneous coordinated technique based on receding horizon control (RHC) and velocity vector control," Science China Technological Sciences, vol. 54, no. 4, pp. 869–876, 2011.
[5] M. Trittler, T. Rothermel, and W. Fichter, "Autopilot for landing small fixed-wing unmanned aerial vehicles with optical sensors," Journal of Guidance, Control, and Dynamics, vol. 39, no. 9, pp. 2011–2021, 2016.
[6] T. Krajník, M. Nitsche, J. Faigl et al., "A practical multirobot localization system," Journal of Intelligent & Robotic Systems, vol. 76, no. 3-4, pp. 539–562, 2014.
[7] S. Yang, S. A. Scherer, and A. Zell, "An onboard monocular vision system for autonomous takeoff, hovering and landing of a micro aerial vehicle," Journal of Intelligent & Robotic Systems, vol. 69, no. 1-4, pp. 499–515, 2013.
[8] Y. Bi and H. Duan, "Implementation of autonomous visual tracking and landing for a low-cost quadrotor," Optik - International Journal for Light and Electron Optics, vol. 124, no. 18, pp. 3296–3300, 2013.
[9] A. Borowczyk, D.-T. Nguyen, A. P.-V. Nguyen, D. Q. Nguyen, D. Saussié, and J. Le Ny, "Autonomous landing of a quadcopter on a high-speed ground vehicle," Journal of Guidance, Control, and Dynamics, vol. 40, no. 9, pp. 2378–2385, 2017.
[10] V. Sudevan, A. Shukla, and H. Karki, "Vision based autonomous landing of an Unmanned Aerial Vehicle on a stationary target," in 2017 17th International Conference on Control, Automation and Systems (ICCAS), pp. 362–367, Jeju, South Korea, October 2017.
[11] T. D. Larsen, K. L. Hansen, N. A. Andersen, and O. Ravn, "Design of Kalman filters for mobile robots; evaluation of the kinematic and odometric approach," in Proceedings of the 1999 IEEE International Conference on Control Applications (Cat. No. 99CH36328), pp. 1021–1026, Kohala Coast, HI, USA, 1999.
[12] D. Falanga, A. Zanchettin, A. Simovic, J. Delmerico, and D. Scaramuzza, "Vision-based autonomous quadrotor landing on a moving platform," in 2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), pp. 200–207, Shanghai, China, 2017.
[13] H. H. Helgesen, F. S. Leira, T. A. Johansen, and T. I. Fossen, "Tracking of marine surface objects from unmanned aerial vehicles with a pan/tilt unit using a thermal camera and optical flow," in 2016 International Conference on Unmanned Aircraft Systems (ICUAS), pp. 107–117, Arlington, VA, USA, June 2016.
[14] T. Muskardin, G. Balmer, S. Wlach, K. Kondak, M. Laiacker, and A. Ollero, "Landing of a fixed-wing UAV on a mobile ground vehicle," in 2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 1237–1242, Stockholm, Sweden, 2016.
[15] A. Borowczyk, D.-T. Nguyen, A. P.-V. Nguyen, D. Q. Nguyen, D. Saussié, and J. Le Ny, "Autonomous landing of a multirotor micro air vehicle on a high velocity ground vehicle," IFAC-PapersOnLine, vol. 50, pp. 10488–10494, 2017.
[16] K. E. Wenzel, A. Masselli, and A. Zell, "Automatic take off, tracking and landing of a miniature UAV on a moving carrier vehicle," Journal of Intelligent & Robotic Systems, vol. 61, no. 1-4, pp. 221–238, 2011.
[17] X. Wang, G. Lu, Z. Shi, and Y. Zhong, "Robust LQR controller for landing unmanned helicopters on a slope," in 2016 35th Chinese Control Conference (CCC), pp. 10639–10644, Chengdu, China, 2016.
[18] M. W. Mueller, M. Hehn, and R. D'Andrea, "A computationally efficient motion primitive for quadrocopter trajectory generation," IEEE Transactions on Robotics, vol. 31, no. 6, pp. 1294–1310, 2015.
[19] M. Hehn and R. D'Andrea, "Quadrocopter trajectory generation and control," IFAC Proceedings Volumes, vol. 44, pp. 1485–1491, 2011.
[20] M. P. Vitus, W. Zhang, and C. J. Tomlin, "A hierarchical method for stochastic motion planning in uncertain environments," in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2263–2268, Vilamoura, Portugal, 2012.
[21] C. Richter, A. Bry, and N. Roy, "Polynomial trajectory planning for aggressive quadrotor flight in dense indoor environments," in Robotics Research, pp. 649–666, Springer, 2016.
[22] J. Gonçalves, J. Lima, and P. Costa, "Real time tracking of an omnidirectional robot - an extended Kalman filter approach," in Proceedings of the Fifth International Conference on Informatics in Control, Automation and Robotics - Volume 4: ICINCO, pp. 5–10, Funchal, Madeira, Portugal, 2008.
[23] V. Lepetit, F. Moreno-Noguer, and P. Fua, "EPnP: an accurate O(n) solution to the PnP problem," International Journal of Computer Vision, vol. 81, no. 2, pp. 155–166, 2009.
[24] S. Lynen, M. W. Achtelik, S. Weiss, and M. Chli, "A robust and modular multi-sensor fusion approach applied to MAV navigation," in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3923–3929, Tokyo, Japan, 2013.
[25] E. M. Gertz and S. J. Wright, "Object-oriented software for quadratic programming," ACM Transactions on Mathematical Software, vol. 29, no. 1, pp. 58–81, 2003.
[26] J. Chen, T. Liu, and S. Shen, "Online generation of collision-free trajectories for quadrotor flight in unknown cluttered environments," in 2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 1476–1483, Stockholm, Sweden, 2016.
Copyright
Copyright © 2019 BoYang Xing et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.