Learning Shape-based Representation for Visual Localization in Extremely Changing Conditions

Hae-Gon Jeon, Sunghoon Im, Jean Oh, and Martial Hebert
Conference Paper, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 7135-7141, May 2020

Abstract

Visual localization is an important task for applications such as navigation and augmented reality, but it becomes challenging when scene appearance changes with time of day, season, or environment. In this paper, we present a convolutional neural network (CNN)-based approach to visual localization that handles appearance variations ranging from normal to drastic, such as pre- and post-disaster scenes. Our approach addresses two key challenges: (1) to reduce the texture bias typical of conventional CNNs, our model learns a shape-based representation by training on stylized images; (2) to make the model robust against layout changes, our approach uses the estimated dominant planes of query images as approximate scene coordinates. We evaluate our method on a variety of scenes, including a simulated disaster dataset, to demonstrate its effectiveness under significant changes in scene layout. Experimental results show that our method provides reliable camera pose predictions across a range of changing conditions.
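
The stylized-image training in (1) lends itself to a simple augmentation scheme. The sketch below is not the authors' implementation; it is a minimal, hypothetical PyTorch setup illustrating the idea of regressing camera pose from images whose textures are randomly replaced. It assumes stylized counterparts of the training images are precomputed offline (e.g., with a style-transfer network, following the Stylized-ImageNet idea); the data loader fields, the stylized_lookup helper, and all hyperparameters are illustrative placeholders.

# Minimal sketch (not the authors' code): training a pose-regression CNN on a
# mix of original and stylized images to encourage a shape-based representation.
import random
import torch
import torch.nn as nn
import torchvision.models as models

class PoseNet(nn.Module):
    """ResNet backbone regressing a 3-D translation and a unit quaternion."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet34(weights=None)
        backbone.fc = nn.Identity()          # keep the 512-D feature vector
        self.backbone = backbone
        self.fc_t = nn.Linear(512, 3)        # camera position
        self.fc_q = nn.Linear(512, 4)        # camera orientation (quaternion)

    def forward(self, x):
        f = self.backbone(x)
        q = self.fc_q(f)
        return self.fc_t(f), q / q.norm(dim=1, keepdim=True)

def pose_loss(t_pred, q_pred, t_gt, q_gt, beta=120.0):
    # Weighted translation + rotation loss, in the style of PoseNet-type regressors.
    return (t_pred - t_gt).norm(dim=1).mean() + beta * (q_pred - q_gt).norm(dim=1).mean()

def train_epoch(model, loader, stylized_lookup, optimizer, device, p_style=0.5):
    # Assumed loader format (placeholder): batches of (images, t_gt, q_gt, indices).
    # stylized_lookup(idx) is a hypothetical helper returning the precomputed
    # stylized version of training image idx as a tensor of the same shape.
    model.train()
    for images, t_gt, q_gt, indices in loader:
        # With probability p_style, swap each image for its stylized version so
        # the network cannot rely on texture cues alone.
        for i, idx in enumerate(indices.tolist()):
            if random.random() < p_style:
                images[i] = stylized_lookup(idx)
        images, t_gt, q_gt = images.to(device), t_gt.to(device), q_gt.to(device)
        t_pred, q_pred = model(images)
        loss = pose_loss(t_pred, q_pred, t_gt, q_gt)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

Mixing original and stylized views of the same scene keeps the pose labels fixed while the texture varies, which is one plausible way to push the learned representation toward shape, as the abstract describes.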

BibTeX

@conference{Jeon-2020-119357,
author = {Hae-Gon Jeon and Sunghoon Im and Jean Oh and Martial Hebert},
title = {Learning Shape-based Representation for Visual Localization in Extremely Changing Conditions},
booktitle = {Proceedings of (ICRA) International Conference on Robotics and Automation},
year = {2020},
month = {May},
pages = {7135 - 7141},
keywords = {Visual localization, deep learning, disaster scenarios},
}