GPU Accelerated Robust Scene Reconstruction - Robotics Institute Carnegie Mellon University

GPU Accelerated Robust Scene Reconstruction

Wei Dong, Jaesik Park, Yi Yang, and Michael Kaess
Conference Paper, Proceedings of (IROS) IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 7863-7870, November 2019

Abstract

We propose a fast and accurate 3D reconstruction system that takes a sequence of RGB-D frames and produces a globally consistent camera trajectory and dense 3D geometry. We redesign core modules of a state-of-the-art offline reconstruction pipeline to maximally exploit the power of the GPU. The GPU-accelerated core modules include RGB-D odometry, geometric feature extraction and matching, point cloud registration, volumetric integration, and mesh extraction. As a result, while reproducing the results of the high-fidelity offline reconstruction system, our system runs more than 10 times faster on average. Nearly 10 Hz can be achieved on medium-size indoor scenes, making our offline system comparable even to online Simultaneous Localization and Mapping (SLAM) systems in terms of speed. Experimental results show that our system produces more accurate results than several state-of-the-art online systems. The system is open source at https://github.com/theNded/Open3D.
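The abstract lists volumetric integration among the GPU-accelerated modules. As a rough illustration of what that step computes, here is a minimal CPU-side NumPy sketch of the standard projective TSDF (truncated signed distance function) update that fuses one depth frame into a voxel grid. This is not the authors' CUDA implementation; the function name, grid layout, and synthetic test scene are all illustrative assumptions.

```python
import numpy as np

def integrate_tsdf(tsdf, weights, depth, K, T_wc, voxel_size, origin, trunc):
    """Fuse one depth frame into a TSDF voxel grid (projective update)."""
    nx, ny, nz = tsdf.shape
    ii, jj, kk = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz),
                             indexing="ij")
    # Voxel centers in world coordinates.
    pts_w = origin + (np.stack([ii, jj, kk], axis=-1) + 0.5) * voxel_size
    # Transform world -> camera, then project with the pinhole intrinsics K.
    T_cw = np.linalg.inv(T_wc)
    pts_c = pts_w @ T_cw[:3, :3].T + T_cw[:3, 3]
    z = pts_c[..., 2]
    z_safe = np.where(z > 1e-6, z, 1.0)            # avoid divide-by-zero
    u = np.round(K[0, 0] * pts_c[..., 0] / z_safe + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * pts_c[..., 1] / z_safe + K[1, 2]).astype(int)
    h, w = depth.shape
    valid = (z > 1e-6) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.where(valid, depth[v.clip(0, h - 1), u.clip(0, w - 1)], 0.0)
    sdf = d - z                                     # signed distance along the ray
    valid &= (d > 0) & (sdf > -trunc)               # skip voxels far behind surface
    obs = np.clip(sdf / trunc, -1.0, 1.0)
    # Weighted running average; each new observation carries weight 1.
    w_new = weights + valid
    tsdf[valid] = (tsdf[valid] * weights[valid] + obs[valid]) / w_new[valid]
    weights[:] = w_new

# Synthetic scene: a flat wall 1 m in front of a pinhole camera at the origin.
K = np.array([[100.0, 0.0, 32.0], [0.0, 100.0, 32.0], [0.0, 0.0, 1.0]])
depth = np.ones((64, 64))                           # constant 1 m depth image
n, voxel = 16, 0.05
tsdf = np.ones((n, n, n))
weights = np.zeros((n, n, n))
origin = np.array([-0.4, -0.4, 0.6])                # grid spans z in [0.6, 1.4]
integrate_tsdf(tsdf, weights, depth, K, np.eye(4), voxel, origin, trunc=0.15)
# The TSDF is positive just in front of the wall and negative just behind it;
# a mesh extractor (e.g. marching cubes) would place the surface at the
# zero crossing between these voxel layers.
print(tsdf[8, 8, 7], tsdf[8, 8, 8])
```

In the real system this loop over voxels runs in parallel on the GPU, which is exactly where the reported order-of-magnitude speedup over the offline pipeline comes from.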

BibTeX

@conference{Dong-2019-122770,
author = {Wei Dong and Jaesik Park and Yi Yang and Michael Kaess},
title = {GPU Accelerated Robust Scene Reconstruction},
booktitle = {Proceedings of (IROS) IEEE/RSJ International Conference on Intelligent Robots and Systems},
year = {2019},
month = {November},
pages = {7863--7870},
}