Depth Completion with Deep Geometry and Context Guidance

Byeong-Uk Lee, Hae-Gon Jeon, Sunghoon Im and In So Kweon
Conference Paper, IEEE International Conference on Robotics and Automation (ICRA), May 2019


Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Abstract

In this paper, we present an end-to-end convolutional neural network (CNN) for depth completion. Our network consists of a geometry network and a context network. The geometry network, a single encoder-decoder network, is trained with a multi-task loss to generate an initial propagated depth map and a surface normal map. These complementary outputs allow it to correctly propagate the initial sparse depth points on slanted surfaces. The context network extracts local and global features from an image to compute a bilateral weight, which preserves edges and fine details in the depth map. The final output is then produced by multiplying the initially propagated depth map by the bilateral weight. To validate the effectiveness and robustness of our network, we performed extensive ablation studies and compared our results against state-of-the-art CNN-based depth completion methods, showing promising results on various scenes.
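The edge-preserving refinement described above can be illustrated with a classical analogue. The sketch below uses a hand-crafted bilateral weight, computed from spatial distance and guide-image intensity difference, in place of the paper's learned weight from local and global context features; the function name, parameters, and sigma values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def bilateral_refine(depth_init, image, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Edge-aware refinement of an initial propagated depth map.

    Hand-crafted stand-in for the paper's *learned* bilateral weight:
    each output pixel is a weighted average of its neighbors, where the
    weight falls off with spatial distance (sigma_s) and with intensity
    difference in the guide image (sigma_r), so depth edges that align
    with image edges are preserved rather than blurred.
    """
    h, w = depth_init.shape
    out = np.zeros_like(depth_init)
    for y in range(h):
        for x in range(w):
            # Clip the window at the image borders.
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            # Spatial term: nearby pixels count more.
            spatial = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            # Range term: pixels with similar image intensity count more.
            range_w = np.exp(-((image[y0:y1, x0:x1] - image[y, x]) ** 2)
                             / (2 * sigma_r ** 2))
            wgt = spatial * range_w
            out[y, x] = np.sum(wgt * depth_init[y0:y1, x0:x1]) / np.sum(wgt)
    return out
```

In the paper the weight map is predicted by the context network and applied by elementwise multiplication; this hand-crafted version only conveys the intuition of why a bilateral weight keeps depth discontinuities sharp.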


@conference{Lee-2019-111938,
author = {Byeong-Uk Lee and Hae-Gon Jeon and Sunghoon Im and In So Kweon},
title = {Depth Completion with Deep Geometry and Context Guidance},
booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
year = {2019},
month = {May},
publisher = {IEEE},
keywords = {Depth completion, convolutional neural network},
}