Journal of Sensors / 2021 / Research Article
Visual and Visual-Inertial SLAM: State of the Art, Classification, and Experimental Benchmarking

Table 3. Indications on the robustness to various scenarios (recommended usages) of the most famous vSLAM methods.
| Algorithm | Lifelong exp. | Large envir. | Low textured | Outdoor (light, outliers) | Robust to mov. | Objectives |
|---|---|---|---|---|---|---|
| MonoSLAM [21] | − | − | − | − | − | Pose estimation in robotics |
| PTAM [27] | − | − | − | − | ∼ | A.R. in small workspaces |
| ORB-SLAM2 [76] | + | + | ∼ | + | + | Robust large path tracking |
| Edge-SLAM [81] | ∼ | + | + | + | + | Low-textured environments |
| DTAM [34] | − | − | ∼ | ∼ | + | Robustness to motion blur |
| MobileFusion [66] | + | − | − | − | − | 3D object modeling on phone |
| LSD-SLAM [35] | ∼ | + | − | + | ∼ | Semidense trajectory estimation |
| SVO [67] | + | ∼ | + | + | + | Fast, consistent, semidirect method |
| DSO [33] | + | ∼ | + | + | + | Direct and sparse VO method |
| KinectFusion [68] | + | − | + | + | ∼ | 3D modeling with the Kinect |
| ElasticFusion [70] | − | − | + | ∼ | ∼ | Map-centric vSLAM |
| S-MSCKF [17] | + | ∼ | ∼ | ∼ | + | Rapid and consistent Kalman filter |
| ROVIO [26] | + | ∼ | − | ∼ | ∼ | Robust VIO for UAVs |
| OKVIS [73] | + | + | ∼ | + | + | Robust stereo VIO for UAVs |
| Vins-Mono [74] | + | + | ∼ | + | + | Full viSLAM method |
| Kimera [60] | + | ∼ | + | ∼ | + | VIO + 3D semantic-metric mesh |
| VIORB [75] | + | + | ∼ | + | + | VI method based on ORB-SLAM |
For each difficulty, a method is rated as robust (+), as having potential difficulties (∼), or as not recommended at all (−). These ratings do not reflect the overall accuracy of the method or the robustness of its initialization procedure.