Precise knowledge of a robot's ego-motion is a crucial requirement for higher-level tasks like autonomous navigation. Bundle-adjustment-based monocular visual odometry has proven successful at estimating the motion of a robot for short sequences, but it suffers from an ambiguity in scale. Hence, approaches that only optimize locally are prone to drift in scale for sequences that span hundreds of frames.
In this paper we present an approach to monocular visual odometry that compensates for drift in scale by applying constraints imposed by the known camera mounting and assumptions about the environment. To this end, we employ a continuously updated point cloud to estimate the camera poses based on 2D-3D correspondences. Within this set of camera poses, we identify keyframes, which are combined into a sliding window and refined by bundle adjustment. Subsequently, we update the scale based on robustly tracked features on the road surface. Results on real datasets demonstrate a significant increase in accuracy compared to the non-scaled scheme.
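The scale update described in the abstract rests on a simple geometric fact: if the camera's metric mounting height above a planar road is known, and the road surface is reconstructed (up to scale) from tracked features, the ratio of the two distances recovers the metric scale. The sketch below illustrates this idea under stated assumptions; the function name, plane-fitting method, and interface are illustrative and not taken from the paper.

```python
import numpy as np

def scale_from_road_plane(road_points, camera_height):
    """Estimate the metric scale factor of an up-to-scale monocular
    reconstruction, assuming (a) tracked road-surface points expressed
    in the camera frame and (b) a known metric camera mounting height
    above a planar road. This is an illustrative sketch, not the
    paper's implementation.

    road_points   -- (N, 3) array of reconstructed road points,
                     arbitrary (unknown) scale
    camera_height -- metric height of the camera above the road [m]
    """
    # Fit a plane to the road points: the plane normal is the
    # direction of least variance, i.e. the right-singular vector
    # belonging to the smallest singular value.
    centroid = road_points.mean(axis=0)
    _, _, vt = np.linalg.svd(road_points - centroid)
    normal = vt[-1]
    # Unscaled distance from the camera centre (origin) to the plane.
    dist = abs(normal @ centroid)
    # Metric scale: known height divided by reconstructed distance.
    return camera_height / dist
```

Multiplying all translations (and point positions) of the local reconstruction by this factor would bring the trajectory to metric scale, which is what keeps the sliding-window estimate from drifting in scale over long sequences.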
Keywords: Localization, Navigation, Robot Vision
Number of pages: 6
Bernd Manfred Kitt, Joern Rehder, Andrew D. Chambers, Miriam Schonbein, Henning Lategahn, and Sanjiv Singh, "Monocular Visual Odometry using a Planar Road Model to Solve Scale Ambiguity," Proc. European Conference on Mobile Robots, October 2011.
@inproceedings{kitt2011monocular,
  author    = "Bernd Manfred Kitt and Joern Rehder and Andrew D. Chambers and Miriam Schonbein and Henning Lategahn and Sanjiv Singh",
  title     = "Monocular Visual Odometry using a Planar Road Model to Solve Scale Ambiguity",
  booktitle = "Proc. European Conference on Mobile Robots",
  month     = "October",
  year      = "2011",
}
The Robotics Institute is part of the School of Computer Science, Carnegie Mellon University.