Deep Non-Rigid Structure from Motion - Robotics Institute Carnegie Mellon University

Deep Non-Rigid Structure from Motion

C. Kong and S. Lucey
Conference Paper, Proceedings of the International Conference on Computer Vision (ICCV), pp. 1558–1567, October 2019

Abstract

Current non-rigid structure from motion (NRSfM) algorithms are limited mainly with respect to: (i) the number of images, and (ii) the type of shape variability they can handle. This has hampered the practical utility of NRSfM for many applications within vision. In this paper we propose a novel deep neural network to recover camera poses and 3D points solely from an ensemble of 2D image coordinates. The proposed neural network is mathematically interpretable as a multi-layer block sparse dictionary learning problem, and can handle problems of unprecedented scale and shape complexity. Extensive experiments demonstrate the performance of our approach, which surpasses all available state-of-the-art methods in precision and robustness by an order of magnitude. We further propose a quality measure (based on the network weights) that circumvents the need for 3D ground truth to ascertain the confidence we have in the reconstruction.
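To make the problem setup concrete, the following is a minimal sketch (not the paper's network) of the classical linear NRSfM model that Deep NRSfM builds on: each non-rigid 3D shape is a linear combination of basis shapes from a dictionary, and an orthographic camera projects it to the 2D coordinates that form the network's only input. All names (`K`, `P`, `B`, `c`, `R`) are illustrative assumptions, and the codes are dense here, whereas the paper imposes hierarchical block sparsity on them.

```python
import numpy as np

rng = np.random.default_rng(0)
K, P = 4, 10                         # number of basis shapes, number of points

# Shape dictionary: K basis shapes, each a 3 x P matrix of 3D points.
B = rng.standard_normal((K, 3, P))

# Per-frame shape codes (block-sparse in the paper; dense here for brevity).
c = rng.standard_normal(K)

# Non-rigid 3D shape for one frame: S = sum_k c_k * B_k  ->  (3, P)
S = np.tensordot(c, B, axes=1)

# Orthographic camera: the first two rows of a 3D rotation matrix,
# so R @ R.T = I_2.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R = Q[:2]

# Observed 2D image coordinates for the frame: W = R S  ->  (2, P).
# NRSfM recovers R, c (and the dictionary B) from many such W alone.
W = R @ S
```

Recovering `R`, `c`, and `B` jointly from a stack of such `W` matrices is the (multi-layer, block-sparse) dictionary learning problem the abstract refers to.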

BibTeX

@conference{Kong-2019-121008,
author = {C. Kong and S. Lucey},
title = {Deep Non-Rigid Structure from Motion},
booktitle = {Proceedings of (ICCV) International Conference on Computer Vision},
year = {2019},
month = {October},
pages = {1558--1567},
}