Dense Surface Reconstruction from Monocular Vision and LiDAR

Zimo Li, Prakruti C. Gogia and Michael Kaess
Conference Paper, Proc. IEEE Intl. Conf. on Robotics and Automation, ICRA, May, 2019


Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.


In this work, we develop a new surface reconstruction pipeline that combines monocular camera images and LiDAR measurements from a moving sensor rig to reconstruct dense 3D mesh models of indoor scenes. 3D LiDAR and cameras are widely deployed to gather geometric information about environments, yet current state-of-the-art multi-view stereo and LiDAR-only reconstruction methods cannot reconstruct indoor environments accurately due to the shortcomings of each sensor type. In our approach, LiDAR measurements are integrated into a multi-view stereo pipeline for point cloud densification and tetrahedralization, and a graph cut algorithm is then used to extract a watertight surface mesh. Because the proposed method leverages the complementary nature of the two sensors, both the accuracy and the completeness of the output model are improved. Experimental results on real-world data show that our method significantly outperforms state-of-the-art camera-only and LiDAR-only reconstruction methods in accuracy and completeness.
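The graph cut step mentioned above can be illustrated with a minimal, self-contained sketch. In pipelines of this family, each tetrahedron of the Delaunay tetrahedralization becomes a graph node, edges between adjacent tetrahedra carry visibility-derived costs, and a source/sink min cut labels each tetrahedron as "outside" or "inside"; the surface mesh is then the set of faces between differently labeled tetrahedra. The toy example below (the chain of five tetrahedra, the edge weights, and the `min_cut_labels` helper are illustrative assumptions, not the paper's implementation) computes such a labeling with a plain Edmonds-Karp max-flow:

```python
from collections import deque

def min_cut_labels(n, edges, source, sink):
    """Label nodes 'outside'/'inside' by a source-sink min cut.

    n      -- number of graph nodes (tetrahedra in the surface-extraction analogy)
    edges  -- list of (u, v, weight) undirected edges (visibility-derived costs)
    source -- node representing free space ("outside")
    sink   -- node representing occupied space ("inside")
    """
    # Symmetric capacity matrix for the undirected cost edges.
    cap = [[0] * n for _ in range(n)]
    for u, v, w in edges:
        cap[u][v] += w
        cap[v][u] += w

    flow = [[0] * n for _ in range(n)]

    def bfs_augmenting_path():
        # Breadth-first search for a path with residual capacity.
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    if v == sink:
                        return parent
                    queue.append(v)
        return None

    # Edmonds-Karp: augment along shortest residual paths until none remain.
    while True:
        parent = bfs_augmenting_path()
        if parent is None:
            break
        bottleneck, v = float('inf'), sink
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u

    # Nodes still reachable from the source in the residual graph are "outside".
    outside = {source}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in range(n):
            if v not in outside and cap[u][v] - flow[u][v] > 0:
                outside.add(v)
                queue.append(v)
    return ['outside' if i in outside else 'inside' for i in range(n)]

# Toy chain of five tetrahedra: the weak edge (1, 2) is the cheapest cut,
# so the extracted surface would lie on the face between tetrahedra 1 and 2.
edges = [(0, 1, 5), (1, 2, 1), (2, 3, 5), (3, 4, 5)]
print(min_cut_labels(5, edges, 0, 4))
# → ['outside', 'outside', 'inside', 'inside', 'inside']
```

In the actual method, the edge weights would come from LiDAR and camera visibility rays crossing tetrahedron faces, and an efficient max-flow solver would replace this didactic matrix-based implementation.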

@inproceedings{
author = {Zimo Li and Prakruti C. Gogia and Michael Kaess},
title = {Dense Surface Reconstruction from Monocular Vision and LiDAR},
booktitle = {Proc. IEEE Intl. Conf. on Robotics and Automation, ICRA},
year = {2019},
month = {May},
}