A Joint Optimization Approach of LiDAR-Camera Fusion for Accurate Dense 3-D Reconstructions

Zhen W, Hu Y, Liu J, and Scherer S
Journal Article, IEEE Robotics and Automation Letters, Vol. 4, No. 4, pp. 3585-3592, October, 2019

Abstract

Fusing data from LiDAR and camera is conceptually attractive because of their complementary properties. For instance, camera images have higher resolution and provide color, while LiDAR data provide more accurate range measurements and a wider field of view. However, the sensor fusion problem remains challenging because it is difficult to find reliable correlations between data of very different characteristics (geometry versus texture, sparse versus dense). This letter proposes an offline LiDAR-camera fusion method to build dense, accurate 3-D models. Specifically, our method jointly solves a bundle adjustment problem and a cloud registration problem to compute the camera poses and the sensor extrinsic calibration. In experiments, we show that our method achieves an average accuracy of 2.7 mm and a resolution of 70 points/cm² when compared against ground-truth data from a survey scanner. Furthermore, the extrinsic calibration result is discussed and shown to outperform the state-of-the-art method.
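The core idea of the abstract, jointly estimating camera poses and the LiDAR-camera extrinsic by stacking bundle-adjustment and cloud-registration residuals into one least-squares problem, can be illustrated with a toy sketch. This is not the paper's implementation: rotations are fixed to identity, only translations are estimated, and all data, intrinsics, and variable names are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy joint optimization: recover a camera translation t and a
# camera-to-LiDAR extrinsic translation e from image reprojection
# residuals (bundle adjustment) and LiDAR point-match residuals
# (cloud registration), solved together as one least-squares problem.
rng = np.random.default_rng(0)
fx = fy = 500.0
cx = cy = 320.0                                   # assumed pinhole intrinsics

world_pts = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))  # landmarks
t_true = np.array([0.3, -0.2, 0.1])               # true camera position
e_true = np.array([0.05, 0.0, -0.02])             # true LiDAR offset (camera frame)

def project(pts_cam):
    """Pinhole projection of camera-frame points to pixels."""
    return np.column_stack((fx * pts_cam[:, 0] / pts_cam[:, 2] + cx,
                            fy * pts_cam[:, 1] / pts_cam[:, 2] + cy))

pix_obs = project(world_pts - t_true)             # synthetic image observations
lidar_pts = world_pts - t_true - e_true           # synthetic LiDAR-frame points

def residuals(x):
    t, e = x[:3], x[3:]
    r_ba = (project(world_pts - t) - pix_obs).ravel()    # bundle adjustment term
    r_reg = (lidar_pts + e + t - world_pts).ravel()      # cloud registration term
    return np.concatenate((r_ba, r_reg))          # jointly minimized

sol = least_squares(residuals, np.zeros(6))
t_est, e_est = sol.x[:3], sol.x[3:]
```

Because both residual blocks share the camera pose, the registration term constrains the extrinsic only once the pose is consistent with the image observations, which is the coupling the joint formulation exploits.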

BibTeX

@article{Zhen-2019-139755,
author = {Zhen, W. and Hu, Y. and Liu, J. and Scherer, S.},
title = {A Joint Optimization Approach of LiDAR-Camera Fusion for Accurate Dense 3-D Reconstructions},
journal = {IEEE Robotics and Automation Letters},
year = {2019},
month = {October},
volume = {4},
number = {4},
pages = {3585-3592},
keywords = {Calibration and Identification,Mapping,Sensor Fusion},
}