
Direct Monocular Odometry Using Points and Lines

Shichao Yang and Sebastian Scherer
Conference Paper, Proceedings of (ICRA) International Conference on Robotics and Automation, pp. 3871–3877, May 2017

Abstract

Most visual odometry algorithms for a monocular camera focus on points, either through feature matching or direct alignment of pixel intensities, while ignoring a common but important geometric entity: edges. In this paper, we propose an odometry algorithm that combines points and edges to benefit from the advantages of both direct and feature-based methods. It works better in textureless environments and is also more robust to lighting changes and fast motion by increasing the convergence basin. We maintain a depth map for the keyframe; in the tracking part, the camera pose is recovered by minimizing both the photometric error and the geometric error to the matched edges in a probabilistic framework. In the mapping part, edges are used to speed up stereo matching and increase its accuracy. On various public datasets, our algorithm achieves performance better than or comparable to state-of-the-art monocular odometry methods. In some challenging textureless environments, our algorithm reduces the state estimation error by over 50%.
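As a rough sketch of the tracking objective described above (the notation here is illustrative, not taken from the paper), jointly minimizing photometric and geometric errors over the camera pose can be written as a weighted sum of robust residuals:

E(\xi) \;=\; \sum_{i \in \mathcal{P}} \rho\!\left( \frac{\left( I\big(\pi(x_i, d_i, \xi)\big) - I_{\mathrm{ref}}(x_i) \right)^2}{\sigma_p^2} \right) \;+\; \sum_{j \in \mathcal{E}} \rho\!\left( \frac{ d\big(\pi(x_j, d_j, \xi),\, L_j\big)^2 }{\sigma_g^2} \right)

Here \xi is the camera pose, \pi(x, d, \xi) reprojects a keyframe pixel x with depth d into the current frame, I and I_{\mathrm{ref}} are image intensities, d(\cdot, L_j) is the distance from a reprojected edge point to its matched edge L_j, \rho is a robust kernel, and the variances \sigma_p^2, \sigma_g^2 weight the two error types in the probabilistic framework. The sets \mathcal{P} (high-gradient pixels) and \mathcal{E} (edge points), and this exact weighting, are assumptions made for illustration.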

BibTeX

@conference{Yang-2017-107910,
author = {Shichao Yang and Sebastian Scherer},
title = {Direct Monocular Odometry Using Points and Lines},
booktitle = {Proceedings of (ICRA) International Conference on Robotics and Automation},
year = {2017},
month = {May},
pages = {3871--3877},
}