Kintinuous: Spatially Extended KinectFusion

Thomas Whelan, John McDonald, Michael Kaess, Maurice Fallon, Hordur Johannsson and John J. Leonard
Conference Paper, RSS Workshop on RGB-D: Advanced Reasoning with Depth Cameras, July, 2012

Download Publication (PDF)

Abstract

In this paper we present an extension to the KinectFusion algorithm that permits dense mesh-based mapping of extended-scale environments in real time. This is achieved through (i) altering the original algorithm such that the region of space being mapped by the KinectFusion algorithm can vary dynamically, (ii) extracting a dense point cloud from the regions that leave the KinectFusion volume due to this variation, and (iii) incrementally adding the resulting points to a triangular mesh representation of the environment. The system is implemented as a set of hierarchical multi-threaded components capable of operating in real time. The architecture facilitates the creation and integration of new modules with minimal impact on the performance of the dense volume tracking and surface reconstruction modules. We provide experimental results demonstrating the system's ability to map areas considerably beyond the scale of the original KinectFusion algorithm, including a two-story apartment and an extended sequence taken from a car at night. To overcome failure of the iterative closest point (ICP) based odometry in areas with few geometric features, we have evaluated the Fast Odometry from Vision (FOVIS) system as an alternative. We provide a comparison between the two approaches, showing a trade-off between the reduced drift of the visual odometry approach and the higher local mesh quality of the ICP-based approach. Finally, we present ongoing work on incorporating full simultaneous localisation and mapping (SLAM) pose-graph optimisation.
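
As a rough illustration of steps (i) and (ii) above, the following C++ sketch shows a toy TSDF volume whose origin slides with the camera, handing back the near-surface voxels that fall outside the new bounds as points for incremental meshing. This is not the authors' implementation (Kintinuous performs this on GPU-resident volumes and handles all axes); the types and names used here (TsdfVolume, shiftVolume, Point3) are hypothetical, and the sketch only conveys the structure of the idea.

#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <vector>

struct Point3 { float x, y, z; };

// A toy dense volume: a cube of voxels storing truncated signed distances,
// anchored at an origin that can move through the world as the sensor moves.
struct TsdfVolume {
    static constexpr int kRes = 64;   // voxels per side (real systems use 512)
    float voxelSize = 0.05f;          // metres per voxel
    Point3 origin{0.0f, 0.0f, 0.0f};  // world position of the volume corner
    std::vector<float> tsdf;          // one signed distance per voxel
    TsdfVolume() : tsdf(kRes * kRes * kRes, 1.0f) {}
};

// When the camera has translated beyond a threshold, slide the volume along x
// (for brevity; a full system handles all three axes), convert the near-surface
// voxels of the slab that leaves the volume into points, and return them.
std::vector<Point3> shiftVolume(TsdfVolume& vol, const Point3& cameraPos,
                                float shiftThreshold) {
    std::vector<Point3> extracted;
    float dx = cameraPos.x - vol.origin.x;
    if (std::fabs(dx) < shiftThreshold) return extracted;  // no shift needed

    int shiftVoxels = static_cast<int>(dx / vol.voxelSize);
    int slab = std::abs(shiftVoxels);
    if (slab > TsdfVolume::kRes) slab = TsdfVolume::kRes;
    for (int z = 0; z < TsdfVolume::kRes; ++z)
        for (int y = 0; y < TsdfVolume::kRes; ++y)
            for (int i = 0; i < slab; ++i) {
                // Index into the slab being dropped (low-x or high-x side).
                int x = shiftVoxels > 0 ? i : TsdfVolume::kRes - 1 - i;
                float d = vol.tsdf[(z * TsdfVolume::kRes + y) * TsdfVolume::kRes + x];
                if (std::fabs(d) < 0.5f * vol.voxelSize)  // voxel lies near a surface
                    extracted.push_back({vol.origin.x + x * vol.voxelSize,
                                         vol.origin.y + y * vol.voxelSize,
                                         vol.origin.z + z * vol.voxelSize});
            }
    // Slide the origin; a real system would also move/clear the voxel data.
    vol.origin.x += shiftVoxels * vol.voxelSize;
    return extracted;
}

int main() {
    TsdfVolume vol;
    std::vector<Point3> globalCloud;  // would feed the incremental mesher
    Point3 camera{1.0f, 0.0f, 0.0f};  // camera has moved 1 m along x
    std::vector<Point3> pts = shiftVolume(vol, camera, 0.3f);
    globalCloud.insert(globalCloud.end(), pts.begin(), pts.end());
    std::printf("extracted %zu points, new volume origin x = %.2f\n",
                globalCloud.size(), vol.origin.x);
    return 0;
}

In the multi-threaded architecture described above, such extraction and the subsequent meshing would run off the tracking path; step (iii) of the abstract then appends the returned points incrementally to the triangular mesh representation.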

BibTeX Reference
@conference{Whelan-2012-7552,
  author    = {Thomas Whelan and John McDonald and Michael Kaess and Maurice Fallon and Hordur Johannsson and John J. Leonard},
  title     = {Kintinuous: Spatially Extended KinectFusion},
  booktitle = {RSS Workshop on RGB-D: Advanced Reasoning with Depth Cameras},
  year      = {2012},
  month     = {July},
}