Enhancing Direct Camera Tracking with Dense Feature Descriptors

Hatem Said Alismail, Brett Browning, and Simon Lucey
Conference Paper, Proceedings of the Asian Conference on Computer Vision (ACCV '16), pp. 535–551, May 2016

Abstract

Direct camera tracking is a popular tool for motion estimation. It promises more precise estimates, enhanced robustness, and denser reconstruction at low computational cost. However, most direct tracking algorithms rely on the brightness constancy assumption, which is seldom satisfied in the real world, making direct tracking unsuitable when dealing with sudden and arbitrary illumination changes. In this work, we propose a non-parametric approach to address illumination variations in direct tracking. Instead of modeling illumination, or relying on difficult-to-optimize robust similarity metrics, we propose to directly minimize the squared distance between densely evaluated local feature descriptors. Our approach is shown to perform well in terms of robustness and runtime. The algorithm is evaluated on two direct tracking problems, template tracking and direct visual odometry, using different feature descriptors proposed in the literature.
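
To make the core idea concrete, the following is a minimal sketch, not the authors' implementation, of replacing the photometric (brightness-constancy) residual with a squared distance over densely evaluated descriptor channels. It uses Python/NumPy, a pure-translation warp, and a simple filter-bank placeholder descriptor; the names dense_descriptor and descriptor_ssd are hypothetical and stand in for whichever dense descriptor is chosen.

import numpy as np

def dense_descriptor(image, num_channels=8):
    """Stand-in for a dense local descriptor (e.g. a bank of filter responses).
    Returns an H x W x C array; any densely evaluated descriptor could be used."""
    channels = []
    for k in range(num_channels):
        # simple oriented finite differences as a placeholder descriptor
        dy = int(np.round(np.sin(2 * np.pi * k / num_channels)))
        dx = int(np.round(np.cos(2 * np.pi * k / num_channels)))
        shifted = np.roll(np.roll(image, dy, axis=0), dx, axis=1)
        channels.append(np.maximum(image - shifted, 0.0))
    return np.stack(channels, axis=-1)

def descriptor_ssd(template, image, p):
    """Squared distance between the descriptors of the template and the warped
    image, here with a pure-translation warp p = (tx, ty) for simplicity."""
    phi_t = dense_descriptor(template)
    warped = np.roll(np.roll(image, int(round(p[1])), axis=0),
                     int(round(p[0])), axis=1)
    phi_i = dense_descriptor(warped)
    r = phi_i - phi_t                  # per-pixel, per-channel residual
    return float(np.sum(r ** 2))

In a full tracker this cost would be minimized iteratively over the warp parameters (e.g. with Gauss-Newton style updates), exactly as in standard direct tracking, with the descriptor channels taking the place of raw intensities.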

Notes
Final publication is available at link.springer.com

BibTeX

@conference{Alismail-2016-5531,
author = {Hatem Said Alismail and Brett Browning and Simon Lucey},
title = {Enhancing Direct Camera Tracking with Dense Feature Descriptors},
booktitle = {Proceedings of Asian Conference on Computer Vision (ACCV '16)},
year = {2016},
month = {May},
pages = {535--551},
publisher = {Springer},
}