
Direct Monocular Odometry Using Points and Lines

Shichao Yang and Sebastian Scherer
Conference Paper, Proceedings of IEEE International Conference on Robotics and Automation ICRA, May, 2017

Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.


Most visual odometry algorithms for a monocular camera focus on points, either through feature matching or direct alignment of pixel intensities, while ignoring a common but important geometric entity: edges. In this paper, we propose an odometry algorithm that combines points and edges to benefit from the advantages of both direct and feature-based methods.
It works better in textureless environments and is also more robust to lighting changes and fast motion because the edges increase the convergence basin. We maintain a depth map for the keyframe; in the tracking part, the camera pose is recovered by minimizing both the photometric error and the geometric error to the matched edges in a probabilistic framework. In the mapping part, edges are used to speed up stereo matching and increase its accuracy. On various public datasets, our algorithm achieves performance better than or comparable to state-of-the-art monocular odometry methods. In some challenging textureless environments, our algorithm reduces the state estimation error by over 50%.
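The abstract's key idea, fusing a photometric residual with a point-to-edge geometric residual in one probabilistic cost, can be sketched as inverse-variance weighted least squares. This is a minimal illustrative sketch, not the paper's implementation: the function names, the 2D line parameterization, and the fixed noise scales `sigma_photo` and `sigma_geo` are all assumptions.

```python
import numpy as np

def photometric_residual(intensity_ref, intensity_cur):
    """Intensity difference for pixels warped from the keyframe
    into the current frame (direct alignment term)."""
    return intensity_cur - intensity_ref

def point_to_line_distance(p, a, b):
    """Perpendicular distance from 2D point p to the line through
    a and b (geometric error of a reprojected point to its matched edge)."""
    d = b - a
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # unit normal of the edge
    return abs(n @ (p - a))

def combined_cost(photo_res, geo_res, sigma_photo=1.0, sigma_geo=1.0):
    """Joint cost: each residual type is weighted by the inverse of its
    assumed noise standard deviation, so the two error sources are fused
    probabilistically rather than by an ad hoc scale factor."""
    return (np.sum((np.asarray(photo_res) / sigma_photo) ** 2)
            + np.sum((np.asarray(geo_res) / sigma_geo) ** 2))
```

In a full tracker, both residuals would be functions of the camera pose and this cost would be minimized iteratively (e.g. Gauss-Newton); here the cost is shown for fixed residuals only.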

@inproceedings{yang2017direct,
  author = {Shichao Yang and Sebastian Scherer},
  title = {Direct Monocular Odometry Using Points and Lines},
  booktitle = {Proceedings of IEEE International Conference on Robotics and Automation ICRA},
  year = {2017},
  month = {May},
}