
Real-time Depth Enhanced Monocular Odometry

Ji Zhang, Michael Kaess and Sanjiv Singh
Conference Paper, Carnegie Mellon University, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Chicago, IL, USA, September 2014

Download Publication (PDF)


Abstract

Visual odometry can be augmented by depth information such as provided by RGB-D cameras, or from lidars associated with cameras. However, such depth information can be limited by the sensors, leaving large areas in the visual images where depth is unavailable. Here, we propose a method to utilize the depth, even if sparsely available, in recovery of camera motion. In addition, the method utilizes depth by triangulation from the previously estimated motion, and salient visual features for which depth is unavailable. The core of our method is a bundle adjustment that refines the motion estimates in parallel by processing a sequence of images, in a batch optimization. We have evaluated our method in three sensor setups, one using an RGB-D camera, and two using combinations of a camera and a 3D lidar. Our method is rated #2 on the KITTI odometry benchmark irrespective of sensing modality, and is rated #1 among visual odometry methods.
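The central idea, a joint optimization over features with and without known depth, can be illustrated with a toy frame-to-frame refinement. The Python sketch below is illustrative only: the function names, synthetic data, and residual weighting are assumptions, not the authors' implementation. It mixes 3D-to-2D reprojection residuals for depth-known features with scalar epipolar residuals for depth-unknown features; the depth-known terms fix the translation scale that epipolar constraints alone leave undetermined.

# Illustrative sketch only: a toy motion-refinement step mixing features
# with known depth (reprojection residuals) and without depth (epipolar
# residuals), in the spirit of depth-enhanced monocular odometry.
# Names and weights are hypothetical, not from the authors' code.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def skew(t):
    # 3x3 skew-symmetric matrix: skew(t) @ v == np.cross(t, v)
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def residuals(x, pts3d, obs2d, rays_prev, rays_curr, w_epi):
    # x: 6-vector [rotation (Rodrigues), translation] for the new frame
    R = Rotation.from_rotvec(x[:3]).as_matrix()
    t = x[3:]
    # Depth-known features: transform 3D points from the previous frame,
    # compare projections with observations in normalized coordinates.
    p = (R @ pts3d.T).T + t
    reproj = p[:, :2] / p[:, 2:3] - obs2d
    # Depth-unknown features: epipolar constraint r2' E r1 = 0 with
    # E = [t]_x R; one scalar residual per correspondence.
    E = skew(t) @ R
    epi = np.einsum('ij,ij->i', rays_curr, (E @ rays_prev.T).T)
    return np.concatenate([reproj.ravel(), w_epi * epi])

# Synthetic example (assumed data, for illustration only):
rng = np.random.default_rng(0)
R_true = Rotation.from_rotvec([0.02, -0.01, 0.03]).as_matrix()
t_true = np.array([0.1, 0.0, 0.05])
pts3d = rng.uniform([-2, -2, 4], [2, 2, 10], (50, 3))   # depth known
p = (R_true @ pts3d.T).T + t_true
obs2d = p[:, :2] / p[:, 2:3]
far = rng.uniform([-5, -5, 30], [5, 5, 60], (50, 3))    # depth unknown
rays_prev = far / np.linalg.norm(far, axis=1, keepdims=True)
q = (R_true @ far.T).T + t_true
rays_curr = q / np.linalg.norm(q, axis=1, keepdims=True)

sol = least_squares(residuals, np.zeros(6),
                    args=(pts3d, obs2d, rays_prev, rays_curr, 1.0))
print("recovered rotation (rotvec):", sol.x[:3])
print("recovered translation:     ", sol.x[3:])

In the actual system, such residuals would come from tracked image features, with depth supplied by the RGB-D camera or lidar where available and by triangulation from the previously estimated motion otherwise, and the refinement would run over a sliding window of frames rather than a single pair.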

BibTeX Reference
@conference{Zhang-2014-7928,
  title = {Real-time Depth Enhanced Monocular Odometry},
  author = {Ji Zhang and Michael Kaess and Sanjiv Singh},
  booktitle = {Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  month = {September},
  year = {2014},
  address = {Chicago, IL, USA},
}