Learning a Context-Dependent Switching Strategy for Robust Visual Odometry

Conference Paper, Proceedings of the 10th International Conference on Field and Service Robotics (FSR '15), pp. 249-263, June 2015

Abstract

Many applications for robotic systems require the systems to traverse diverse, unstructured environments. State estimation with Visual Odometry (VO) in these applications is challenging because there is no single algorithm that performs well across all environments and situations. The unique trade-offs inherent to each algorithm mean different algorithms excel in different environments. We develop a method to increase robustness in state estimation by using an ensemble of VO algorithms. The method combines the estimates by dynamically switching to the best algorithm for the current context, according to a statistical model of VO estimate errors. The model is a Random Forest regressor that is trained to predict the accuracy of each algorithm as a function of different features extracted from the sensory input. We evaluate our method on a dataset consisting of four unique environments and eight runs, totaling over 25 minutes of data. Our method reduces the mean translational relative pose error by 3.5% and the angular error by 4.3% compared to the single best odometry algorithm. Compared to the poorest performing odometry algorithm, our method reduces the mean translational error by 39.4% and the angular error by 20.1%.
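
To make the switching idea concrete, below is a minimal sketch (not the authors' code) of a context-dependent switcher: one Random Forest regressor per VO algorithm is trained to predict that algorithm's relative pose error from context features, and at runtime the algorithm with the lowest predicted error is selected. The class name, feature dimensions, and algorithm names are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch (assumed implementation): one Random Forest error
# regressor per VO algorithm; at runtime, switch to the algorithm with
# the lowest predicted pose error for the current sensory context.
import numpy as np
from sklearn.ensemble import RandomForestRegressor


class SwitchingVO:
    def __init__(self, algorithm_names):
        self.algorithm_names = algorithm_names
        # One regressor per algorithm maps context features -> expected error.
        self.models = {name: RandomForestRegressor(n_estimators=100)
                       for name in algorithm_names}

    def fit(self, features, errors):
        """features: (N, D) context features extracted from the sensor stream.
        errors: dict mapping algorithm name -> (N,) relative pose errors
        measured against ground truth during training."""
        for name, model in self.models.items():
            model.fit(features, errors[name])

    def select(self, feature_vector):
        """Return the algorithm predicted to be most accurate right now."""
        preds = {name: model.predict(feature_vector.reshape(1, -1))[0]
                 for name, model in self.models.items()}
        return min(preds, key=preds.get)


# Hypothetical usage with placeholder data (two VO algorithms, 8 features).
vo = SwitchingVO(["dense_vo", "feature_vo"])
X = np.random.rand(200, 8)          # e.g. texture, lighting, motion statistics
errs = {"dense_vo": np.random.rand(200),
        "feature_vo": np.random.rand(200)}
vo.fit(X, errs)
print(vo.select(np.random.rand(8)))
```

In practice the regressors would be trained on logged runs with ground-truth poses, and the predicted-error comparison would be repeated at each estimation step so the ensemble can switch as the environment changes.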

BibTeX

@conference{Holtz-2015-5980,
author = {Kristen Holtz and Daniel Maturana and Sebastian Scherer},
title = {Learning a Context-Dependent Switching Strategy for Robust Visual Odometry},
booktitle = {Proceedings of 10th International Conference on Field and Service Robotics (FSR '15)},
year = {2015},
month = {June},
pages = {249--263},
}