Joint Surface Reconstruction from Monocular Vision and LiDAR

Zimo Li
Master's Thesis, Tech. Report CMU-RI-TR-19-57, August 2019


Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Abstract

In recent years, dense reconstruction has gained popularity because of its broad applications in inspection, mapping, and planning. Cameras or LiDARs are generally deployed for 3D dense reconstruction. However, current camera-only or LiDAR-only reconstruction pipelines have significant limitations in achieving an accurate and complete scene reconstruction in certain environments, owing to the inherent sensing properties of each modality.

In this thesis, we propose a new surface reconstruction pipeline that combines monocular camera images and LiDAR measurements from a moving sensor rig to accurately reconstruct dense 3D mesh models of different scenes, especially scenes that are challenging for visual-only or LiDAR-only reconstruction. In particular, we exploit the strengths of multi-view stereo and integrate it with LiDAR measurements to further improve robustness and accuracy. Current approaches employing both cameras and LiDARs mainly focus on texture-mapping with color information from the camera images, or on improving camera depth estimation with the LiDAR. Such methods, however, draw geometric information from only one sensor instead of fusing the geometry measured by both. In contrast, the proposed pipeline uses a two-stage approach to fuse the structural measurements from the LiDAR with the camera images and generate a surface mesh. In the first stage, LiDAR measurements are integrated into a multi-view stereo pipeline to aid visual point cloud densification. After combining the dense visual point cloud with the LiDAR point cloud, a graph-cut algorithm is applied to extract a watertight surface mesh. To validate the proposed pipeline, we collect data from different kinds of scenes and compare our results with state-of-the-art reconstruction methods. The experimental results show that our method outperforms both camera-only and LiDAR-only reconstruction pipelines in terms of accuracy and completeness.
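The full two-stage pipeline is beyond a short snippet, but the intermediate step of combining the densified visual point cloud with the LiDAR point cloud can be sketched in plain NumPy. The function below is a hypothetical illustration only: the function name, voxel size, and LiDAR confidence weighting are assumptions, and the thesis's actual method injects LiDAR into the MVS densification stage and extracts the mesh with a graph cut, neither of which is shown here.

```python
import numpy as np

def fuse_point_clouds(visual_pts, lidar_pts, voxel=0.05, lidar_weight=2.0):
    """Merge two (N, 3) point clouds on a voxel grid, taking a weighted
    centroid per cell. LiDAR points get a higher (assumed) weight, since
    their range measurements are typically more reliable than MVS depth."""
    pts = np.vstack([visual_pts, lidar_pts])
    w = np.concatenate([np.ones(len(visual_pts)),
                        np.full(len(lidar_pts), lidar_weight)])
    # Quantize points to integer voxel coordinates.
    keys = np.floor(pts / voxel).astype(np.int64)
    # Group points that fall in the same voxel cell.
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    n_cells = inv.max() + 1
    sums = np.zeros((n_cells, 3))
    wsum = np.zeros(n_cells)
    np.add.at(sums, inv, pts * w[:, None])
    np.add.at(wsum, inv, w)
    # Weighted centroid of each occupied voxel.
    return sums / wsum[:, None]
```

A surface extraction step (in the thesis, a graph-cut formulation over a Delaunay tetrahedralization of the fused cloud) would then turn this merged point set into a watertight mesh.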


@mastersthesis{Li-2019-117108,
author = {Zimo Li},
title = {Joint Surface Reconstruction from Monocular Vision and LiDAR},
year = {2019},
month = {August},
school = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-19-57},
keywords = {3D Reconstruction, Mapping, LiDAR, Camera},
}