Long-term Visual Map Sparsification with Heterogeneous GNN - Robotics Institute Carnegie Mellon University

Long-term Visual Map Sparsification with Heterogeneous GNN

Ming-Fang Chang, Yipu Zhao, Rajvi Shah, Jakob J. Engel, Michael Kaess, and Simon Lucey
Conference Paper, Proceedings of (CVPR) Computer Vision and Pattern Recognition, pp. 2406-2415, June 2022

Abstract

We address the problem of map sparsification for long-term visual localization. Map sparsification commonly assumes that the pre-built map and the localization queries captured later are consistent. However, this assumption is easily violated in a dynamic world. Additionally, the map size grows as new data accumulates over time, causing large data overhead in the long term. In this paper, we aim to overcome environmental changes and reduce the map size at the same time by selecting points that are valuable for future localization. Inspired by recent progress in Graph Neural Networks (GNNs), we propose the first work that models SfM maps as heterogeneous graphs and predicts 3D point importance scores with a GNN, which enables us to directly exploit the rich information in the SfM map graph. Two novel supervision terms are proposed: 1) a data-fitting term for selecting points valuable for future localization based on training queries; 2) a K-Cover term for selecting sparse points with full-map coverage. The experiments show that our method selected map points on stable and widely visible structures and outperformed baselines in localization performance.
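To make the K-Cover idea concrete, below is a minimal greedy sketch of the classic coverage objective it builds on: pick a sparse subset of map points so that every keyframe still observes at least K selected points. This is an illustrative baseline, not the paper's learned GNN method; the `visibility` structure (point id → set of observing keyframes) is a hypothetical stand-in for the SfM map's point-to-camera observations.

```python
# Illustrative greedy K-Cover baseline (not the paper's learned GNN method).
# visibility: hypothetical dict mapping each point id to the set of keyframes
# that observe that point, as recovered from an SfM reconstruction.

def greedy_k_cover(visibility, keyframes, k):
    """Greedily select points until every keyframe is covered k times."""
    need = {kf: k for kf in keyframes}   # remaining coverage demand per keyframe
    remaining = dict(visibility)         # candidate points not yet selected
    selected = []
    while any(n > 0 for n in need.values()) and remaining:
        # Pick the point that satisfies the most outstanding coverage demand.
        best = max(
            remaining,
            key=lambda p: sum(1 for kf in remaining[p] if need.get(kf, 0) > 0),
        )
        gain = sum(1 for kf in remaining[best] if need.get(kf, 0) > 0)
        if gain == 0:
            break                        # no remaining point helps any keyframe
        for kf in remaining[best]:
            if need.get(kf, 0) > 0:
                need[kf] -= 1
        selected.append(best)
        del remaining[best]
    return selected
```

In the paper, this coverage criterion is instead posed as a differentiable supervision term so the GNN's predicted point scores, rather than a greedy loop, drive the selection.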

BibTeX

@conference{Chang-2022-134125,
author = {Ming-Fang Chang and Yipu Zhao and Rajvi Shah and Jakob J. Engel and Michael Kaess and Simon Lucey},
title = {Long-term Visual Map Sparsification with Heterogeneous GNN},
booktitle = {Proceedings of (CVPR) Computer Vision and Pattern Recognition},
year = {2022},
month = {June},
pages = {2406-2415},
}