Depth Completion with Deep Geometry and Context Guidance

Byeong-Uk Lee, Hae-Gon Jeon, Sunghoon Im, and In So Kweon
Conference Paper, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 3281–3287, May 2019

Abstract

In this paper, we present an end-to-end convolutional neural network (CNN) for depth completion. Our network consists of a geometry network and a context network. The geometry network, a single encoder-decoder, is trained with a multi-task loss to jointly produce an initial propagated depth map and a surface normal map. These complementary outputs allow the network to correctly propagate sparse initial depth points across slanted surfaces. The context network extracts local and global features of the image and combines them into a bilateral weight, which preserves edges and fine details in the depth map. The final output is produced by multiplying the initially propagated depth map by the bilateral weight. To validate the effectiveness and robustness of our network, we perform extensive ablation studies and compare against state-of-the-art CNN-based depth completion methods, showing promising results on various scenes.
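
To make the data flow in the abstract concrete, the sketch below outlines the described pipeline in PyTorch: a geometry encoder-decoder that predicts an initial dense depth map together with surface normals, a context branch that fuses local and global image features into a per-pixel bilateral weight, and an element-wise product as the final depth. This is a minimal illustration only; all module names, layer counts, and channel widths are assumptions, not the authors' architecture or loss.

# Minimal PyTorch-style sketch of the pipeline described in the abstract.
# Layer counts, channel widths, and module names are illustrative assumptions.
import torch
import torch.nn as nn

class GeometryNet(nn.Module):
    """Single encoder-decoder with a multi-task head: predicts an initial
    propagated depth map and a surface normal map from RGB + sparse depth."""
    def __init__(self, feat=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, feat, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat * 2, feat, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat, feat, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.depth_head = nn.Conv2d(feat, 1, 3, padding=1)   # initial propagated depth
        self.normal_head = nn.Conv2d(feat, 3, 3, padding=1)  # surface normal

    def forward(self, rgb, sparse_depth):
        x = torch.cat([rgb, sparse_depth], dim=1)
        feat = self.decoder(self.encoder(x))
        return self.depth_head(feat), self.normal_head(feat)

class ContextNet(nn.Module):
    """Extracts a local and a global image feature and fuses them into a
    per-pixel bilateral weight that preserves edges and fine details."""
    def __init__(self, feat=32):
        super().__init__()
        self.local_branch = nn.Sequential(
            nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.global_branch = nn.Sequential(
            nn.Conv2d(3, feat, 3, stride=4, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),  # collapse to a global context vector
        )
        self.weight_head = nn.Conv2d(feat * 2, 1, 3, padding=1)

    def forward(self, rgb):
        local_feat = self.local_branch(rgb)
        global_feat = self.global_branch(rgb).expand_as(local_feat)
        w = self.weight_head(torch.cat([local_feat, global_feat], dim=1))
        return torch.sigmoid(w)  # bilateral weight in (0, 1)

class DepthCompletion(nn.Module):
    """Final output = initial propagated depth x bilateral weight."""
    def __init__(self):
        super().__init__()
        self.geometry = GeometryNet()
        self.context = ContextNet()

    def forward(self, rgb, sparse_depth):
        init_depth, normal = self.geometry(rgb, sparse_depth)
        weight = self.context(rgb)
        return init_depth * weight, normal

if __name__ == "__main__":
    model = DepthCompletion()
    rgb = torch.randn(1, 3, 64, 64)
    sparse = torch.randn(1, 1, 64, 64)
    depth, normal = model(rgb, sparse)
    print(depth.shape, normal.shape)  # (1, 1, 64, 64), (1, 3, 64, 64)

The refinement step is a simple element-wise product, so the context branch can only rescale the geometry network's prediction; the paper's multi-task loss and propagation details determine how well that initial depth is formed in the first place.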

BibTeX

@conference{Lee-2019-111938,
author = {Byeong-Uk Lee and Hae-Gon Jeon and Sunghoon Im and In So Kweon},
title = {Depth Completion with Deep Geometry and Context Guidance},
booktitle = {Proceedings of (ICRA) International Conference on Robotics and Automation},
year = {2019},
month = {May},
pages = {3281 - 3287},
publisher = {IEEE},
keywords = {Depth completion, convolutional neural network},
}