Zimo Li – MSR Thesis Talk

MSR Speaking Qualifier


Zimo Li, MSR Student, Robotics Institute, Carnegie Mellon University
Tuesday, July 2, 1:00 pm – 2:30 pm
NSH 4305

Title: Joint Surface Reconstruction from Monocular Vision and LiDAR



In recent years, dense reconstruction has gained popularity because of its broad applications in inspection, mapping, and planning. Cameras or LiDARs are generally deployed for dense 3D reconstruction. However, due to the inherent limitations of each sensor, current camera-based or LiDAR-based reconstruction pipelines struggle to achieve an accurate and complete scene reconstruction in certain environments.

We propose a new surface reconstruction pipeline that combines monocular camera images and LiDAR measurements from a moving sensor rig to accurately reconstruct dense 3D mesh models of a variety of scenes, especially scenes that are challenging for visual-only or LiDAR-only reconstruction. In particular, we exploit the strengths of multi-view stereo and integrate its output with the LiDAR measurements to further improve robustness and accuracy.

Current approaches employing both cameras and LiDARs mainly focus on texture mapping with color information from the camera images, or on improving camera depth estimation with the LiDAR. Such methods exploit the geometric information from only a single sensor rather than fusing the geometry from both. In contrast, the proposed pipeline uses a two-stage approach to fuse the structural measurements from the LiDAR with the camera images to generate a surface mesh. In the first stage, LiDAR measurements are integrated into a multi-view stereo pipeline to aid densification of the visual point cloud. In the second stage, the dense visual point cloud is combined with the LiDAR point cloud, and a graph-cut algorithm is applied to extract a watertight surface mesh.

To validate the proposed pipeline, we collect data from different scenes and compare results from our method with state-of-the-art reconstruction methods. The experimental results show that our method outperforms both the camera-only and LiDAR-only reconstruction pipelines in terms of accuracy and completeness.
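The point-cloud fusion at the start of the second stage can be sketched as a simple merge followed by voxel-grid deduplication. This is an illustrative simplification only, not the talk's actual implementation: the function name and voxel size are assumptions, and the subsequent graph-cut surface extraction is omitted entirely.

```python
import numpy as np

def fuse_point_clouds(visual_pts, lidar_pts, voxel_size=0.05):
    """Merge a dense visual (MVS) point cloud with a LiDAR point cloud,
    then deduplicate with a voxel-grid filter (one point per occupied voxel).

    visual_pts, lidar_pts: (N, 3) arrays of xyz points in a common frame.
    """
    pts = np.vstack([visual_pts, lidar_pts])
    # Quantize each point to its integer voxel index.
    keys = np.floor(pts / voxel_size).astype(np.int64)
    # Keep the first point encountered in each occupied voxel.
    _, idx = np.unique(keys, axis=0, return_index=True)
    return pts[np.sort(idx)]
```

In practice the two clouds must first be registered into a common frame using the rig's extrinsic calibration and the estimated sensor trajectory; the sketch assumes that alignment has already been done.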



Thesis Committee:

Michael Kaess (advisor)

Simon Lucey

Kumar Shaurya Shankar