DPSNet: End-to-end Deep Plane Sweep Stereo - Robotics Institute Carnegie Mellon University

Sunghoon Im, Hae-Gon Jeon, Stephen Lin, and In So Kweon
Conference Paper, Proceedings of the International Conference on Learning Representations (ICLR), December 2018

Abstract

Multiview stereo aims to reconstruct scene depth from images acquired by a camera under arbitrary motion. Recent methods address this problem through deep learning, which can utilize semantic cues to deal with challenges such as textureless and reflective regions. In this paper, we present a convolutional neural network called DPSNet (Deep Plane Sweep Network) whose design is inspired by best practices of traditional geometry-based approaches. Rather than directly estimating depth and/or optical flow correspondence from image pairs as done in many previous deep learning methods, DPSNet takes a plane sweep approach that involves building a cost volume from deep features using the plane sweep algorithm, regularizing the cost volume via a context-aware cost aggregation, and regressing the depth map from the cost volume. The cost volume is constructed using a differentiable warping process that allows for end-to-end training of the network. Through the effective incorporation of conventional multiview stereo concepts within a deep learning framework, DPSNet achieves state-of-the-art reconstruction results on a variety of challenging datasets.
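The core idea the abstract describes, warping source-view features onto the reference view for a set of hypothesized depth planes and scoring the agreement, can be sketched in a few lines. The snippet below is an illustrative NumPy version using nearest-neighbor sampling and a planar-homography warp; it is not DPSNet's implementation (the paper uses deep features, bilinear sampling, and a learned cost aggregation so that the whole pipeline stays differentiable), and all names and the sign convention of the homography are assumptions for this sketch.

```python
import numpy as np

def plane_sweep_cost_volume(ref_feat, src_feat, K, R, t, depths):
    """Toy plane-sweep cost volume (hypothetical helper, not DPSNet's code).

    For each fronto-parallel plane at depth d, warp the source feature map
    into the reference view via the plane-induced homography and record the
    mean absolute feature difference per pixel.
    """
    H_img, W_img, C = ref_feat.shape
    K_inv = np.linalg.inv(K)
    n = np.array([0.0, 0.0, 1.0])  # fronto-parallel plane normal
    ys, xs = np.mgrid[0:H_img, 0:W_img]
    # homogeneous pixel coordinates of the reference view, shape 3 x N
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    cost = np.zeros((len(depths), H_img, W_img))
    for i, d in enumerate(depths):
        # homography induced by the plane at depth d (sign convention assumed)
        Hmat = K @ (R + np.outer(t, n) / d) @ K_inv
        warped = Hmat @ pix
        warped = warped[:2] / warped[2:]
        # nearest-neighbor sampling; DPSNet uses differentiable bilinear sampling
        u = np.clip(np.round(warped[0]).astype(int), 0, W_img - 1)
        v = np.clip(np.round(warped[1]).astype(int), 0, H_img - 1)
        src_warped = src_feat[v, u].reshape(H_img, W_img, C)
        cost[i] = np.abs(ref_feat - src_warped).mean(axis=-1)
    return cost  # shape: (num_planes, H, W); low cost = good depth hypothesis
```

A per-pixel depth map would then come from the plane index with the lowest (regularized) cost; DPSNet instead regresses depth softly from the aggregated volume, which keeps the estimate sub-plane accurate and trainable end to end.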

BibTeX

@conference{Im-2018-110411,
  author    = {Sunghoon Im and Hae-Gon Jeon and Stephen Lin and In So Kweon},
  title     = {DPSNet: End-to-end Deep Plane Sweep Stereo},
  booktitle = {Proceedings of International Conference on Learning Representations (ICLR)},
  year      = {2018},
  month     = {December},
  keywords  = {Deep Learning, Stereo, Depth, Geometry},
}