Visual Odometry Survey
- Introduction
- Visual Odometry or VSLAM
- OF-VO: Robust and Efficient Stereo Visual Odometry Using Points and Feature Optical Flow
- SLAMBook
- SVO: Fast Semi-Direct Monocular Visual Odometry
- Robust Odometry Estimation for RGB-D Cameras
- Parallel Tracking and Mapping for Small AR Workspaces
- ORB-SLAM
- A ROS Implementation of the Mono-Slam Algorithm
- DTAM: Dense tracking and mapping in real-time
- LSD-SLAM: Large-Scale Direct Monocular SLAM
- RGBD-Odometry (Visual Odometry based on RGB-D images)
- Py-MVO: Monocular Visual Odometry using Python
- Stereo-Odometry-SOFT
- monoVO-python
- DVO: Robust Odometry Estimation for RGB-D Cameras
- Dense Visual Odometry and SLAM (dvo_slam)
- REVO: Robust Edge-based Visual Odometry
- xivo
- PaoPaoRobot
- ygz-slam
- RTAB MAP
- MYNT-EYE
- Kintinuous
- ElasticFusion
- Co-Fusion: Real-time Segmentation, Tracking and Fusion of Multiple Objects
- Visual Inertial Odometry or VIO-SLAM
- R-VIO: Robocentric Visual-Inertial Odometry
- Kimera-VIO: Open-Source Visual Inertial Odometry
- ADVIO: An Authentic Dataset for Visual-Inertial Odometry
- MSCKF_VIO: Robust Stereo Visual Inertial Odometry for Fast Autonomous Flight
- LIBVISO2: C++ Library for Visual Odometry 2
- Stereo Visual SLAM for Mobile Robots Navigation
- Combining Edge Images and Depth Maps for Robust Visual Odometry
- HKUST Aerial Robotics Group
- VINS-Fusion: Online Temporal Calibration for Monocular Visual-Inertial Systems
- Monocular Visual-Inertial State Estimation for Mobile Augmented Reality
- Computer Vision Group, Department of Informatics, Technical University of Munich (TUM)
- Visual-Inertial DSO: https://vision.in.tum.de/research/vslam/vi-dso
- Stereo odometry based on careful feature selection and tracking
- OKVIS: Open Keyframe-based Visual-Inertial SLAM
- Trifo-VIO: Robust and Efficient Stereo Visual Inertial Odometry using Points and Lines
- PL-VIO: Tightly-Coupled Monocular Visual–Inertial Odometry Using Point and Line Features
- Overview of visual inertial navigation
- CNN-Based (Net VO or Net VSLAM)
- VINet: Visual-Inertial Odometry as a Sequence-to-Sequence Learning Problem
- DeepVO: Towards End-to-End Visual Odometry with Deep Recurrent Convolutional Neural Networks
- UnDeepVO - Implementation of Monocular Visual Odometry through Unsupervised Deep Learning
- (ESP-VO) End-to-End, Sequence-to-Sequence Probabilistic Visual Odometry through Deep Neural Networks
- Lidar-Visual Odometry
#########################################
GitHub: https://github.com/MichaelBeechan
CSDN: https://blog.youkuaiyun.com/u011344545
Star/fork welcome: https://github.com/MichaelBeechan/Visual-Odometry-Review
#########################################
This is a survey blog post on currently open-source SLAM and visual odometry (VO) systems.
Introduction
SLAM is mainly divided into two parts: the front end and the back end. The front end is the visual odometry (VO), which roughly estimates the motion of the camera from adjacent images and provides a good initial value for the back end.
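To make the front-end idea concrete, below is a minimal sketch of a single two-frame, feature-based VO step in Python with OpenCV. It is an illustration only, not the method of any system listed above: the image filenames and the camera intrinsics K are assumed placeholders, and for a monocular camera the recovered translation is only known up to scale.

```python
# Minimal two-frame feature-based VO sketch (assumed inputs: frame1.png, frame2.png, intrinsics K)
import numpy as np
import cv2

# Example pinhole intrinsics (placeholder values, assumed known from calibration)
K = np.array([[718.856,   0.0,   607.1928],
              [  0.0,   718.856, 185.2157],
              [  0.0,     0.0,     1.0   ]])

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# 1. Detect and describe ORB features in two adjacent frames
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# 2. Match descriptors between the frames
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 3. Estimate the essential matrix with RANSAC and recover the relative pose
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                               prob=0.999, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# R, t describe the camera motion between the two frames (t up to scale for
# monocular input); this rough estimate is the initial value handed to the back end.
print("Rotation:\n", R)
print("Translation (up to scale):\n", t)
```

Full systems such as those listed above go well beyond this sketch: they add keyframe selection, local mapping, loop closing, and back-end optimization (bundle adjustment or pose-graph optimization) to refine these frame-to-frame initial estimates.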