Dense Surface Reconstruction from Monocular Vision and LiDAR - Robotics Institute Carnegie Mellon University

Zimo Li, Prakruti C. Gogia, and Michael Kaess
Conference Paper, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 6905-6911, May 2019

Abstract

In this work, we develop a new surface reconstruction pipeline that combines monocular camera images and LiDAR measurements from a moving sensor rig to reconstruct dense 3D mesh models of indoor scenes. 3D LiDAR and cameras are widely deployed to gather geometric information about environments, yet current state-of-the-art multi-view stereo and LiDAR-only reconstruction methods cannot reconstruct indoor environments accurately due to the shortcomings of each sensor type. In our approach, LiDAR measurements are integrated into a multi-view stereo pipeline for point cloud densification and tetrahedralization, and a graph cut algorithm is then used to extract a watertight surface mesh. Because the proposed method leverages the complementary nature of the two sensors, the accuracy and completeness of the output model improve. Experimental results on real-world data show that our method significantly outperforms state-of-the-art camera-only and LiDAR-only reconstruction methods in both accuracy and completeness.
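To illustrate the final stage the abstract describes (tetrahedralize the densified point cloud, label tetrahedra inside/outside, and take the boundary faces as a watertight mesh), here is a minimal Python sketch using SciPy's Delaunay triangulation. The radius-based labeling below is a toy stand-in: the paper derives the inside/outside labels from a visibility-based graph cut over camera and LiDAR measurements, which is not reproduced here, and the random point cloud is synthetic.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
pts = rng.normal(size=(300, 3))          # synthetic stand-in point cloud

dt = Delaunay(pts)                        # tetrahedralize the densified cloud
cent = pts[dt.simplices].mean(axis=1)     # centroid of each tetrahedron
# Toy inside/outside labels; the paper instead obtains these from a
# graph cut over LiDAR/camera visibility rays, not a radius heuristic.
inside = np.linalg.norm(cent, axis=1) < 1.0

faces = []
for t, nbrs in enumerate(dt.neighbors):
    if not inside[t]:
        continue
    for f, n in enumerate(nbrs):          # neighbor n lies opposite vertex f
        if n == -1 or not inside[n]:      # crossing or hull face -> surface
            face = tuple(sorted(v for i, v in enumerate(dt.simplices[t])
                                if i != f))
            faces.append(face)

# The inside/outside boundary is closed: every surface face occurs once,
# shared by exactly one inside and one non-inside tetrahedron.
assert len(faces) == len(set(faces))
print(f"{len(faces)} surface triangles")
```

Extracting faces only from the inside side guarantees each boundary triangle is emitted exactly once, which is what makes the resulting mesh closed regardless of how the labels were produced.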

BibTeX

@conference{Li-2019-116376,
author = {Zimo Li and Prakruti C. Gogia and Michael Kaess},
title = {Dense Surface Reconstruction from Monocular Vision and LiDAR},
booktitle = {Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)},
year = {2019},
month = {May},
pages = {6905--6911},
}