3D Inference from Unposed Sparse View Images

Master's Thesis, Tech. Report CMU-RI-TR-24-15, May 2024

Abstract

We propose UpFusion, a system that can perform novel view synthesis and infer 3D representations for generic objects given a sparse set of reference images without corresponding pose information. Current sparse-view 3D inference methods typically rely on camera poses to geometrically aggregate information from input views, but are not robust in the wild, where such information may be unavailable or inaccurate. In contrast, UpFusion sidesteps this requirement by learning to implicitly leverage the available images as context in a conditional generative model for synthesizing novel views. We incorporate two complementary forms of conditioning into diffusion models for leveraging the input views: (a) inferring query-view aligned features using a scene-level transformer, and (b) intermediate attentional layers that can directly observe the input image tokens. We show that this mechanism enables generating high-fidelity novel views while improving synthesis quality as additional (unposed) images become available. We evaluate our approach on the Co3Dv2 and Google Scanned Objects datasets and demonstrate the benefits of our method over pose-reliant sparse-view methods as well as single-view methods that cannot leverage additional views. Finally, we show that our learned model can generalize beyond the training categories and even allows reconstruction from self-captured images of generic objects in the wild.
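
To make the two conditioning pathways concrete, below is a minimal PyTorch sketch, not the thesis implementation: a scene-level transformer decodes query-view aligned features from unposed input-view tokens (pathway (a)), and a denoiser block additionally cross-attends directly to the input image tokens (pathway (b)). All module names, dimensions, and the overall wiring are illustrative assumptions.

import torch
import torch.nn as nn

class SceneLevelTransformer(nn.Module):
    """Aggregates unposed input-view tokens and decodes query-view aligned
    features for a target view (conditioning pathway (a))."""
    def __init__(self, dim=256, depth=4, heads=8):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, depth)
        self.decoder = nn.TransformerDecoder(dec_layer, depth)

    def forward(self, input_tokens, query_tokens):
        # input_tokens: (B, N_views * N_patches, dim); no poses required
        context = self.encoder(input_tokens)
        # query_tokens: (B, N_query, dim), e.g. embedded query-view rays
        return self.decoder(query_tokens, context)

class ConditionedDenoiserBlock(nn.Module):
    """One diffusion denoiser block: self-attention over noisy latents plus
    cross-attention that directly observes the input image tokens
    (conditioning pathway (b))."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))
        self.norm1, self.norm2, self.norm3 = (nn.LayerNorm(dim),
                                              nn.LayerNorm(dim),
                                              nn.LayerNorm(dim))

    def forward(self, latents, aligned_feats, image_tokens):
        # (a) query-view aligned features enter as a residual hint
        x = latents + aligned_feats
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h)[0]
        # (b) attend straight to the (unposed) input image tokens
        h = self.norm2(x)
        x = x + self.cross_attn(h, image_tokens, image_tokens)[0]
        return x + self.mlp(self.norm3(x))

if __name__ == "__main__":
    B, P, Q, D = 2, 3 * 196, 196, 256  # 3 unposed views, 14x14 patches each
    srt = SceneLevelTransformer(D)
    block = ConditionedDenoiserBlock(D)
    image_tokens = torch.randn(B, P, D)       # tokens from the input views
    query_rays = torch.randn(B, Q, D)         # embedded query-view rays
    noisy_latents = torch.randn(B, Q, D)      # diffusion latents
    aligned = srt(image_tokens, query_rays)   # pathway (a)
    out = block(noisy_latents, aligned, image_tokens)  # pathways (a) + (b)
    print(out.shape)  # torch.Size([2, 196, 256])

In this sketch, adding more unposed views simply lengthens the input token sequence, which is how both pathways can improve synthesis quality with additional images without ever requiring camera poses.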

BibTeX

@mastersthesis{NagoorKani-2024-140626,
author = {Bharath Raj Nagoor Kani},
title = {3D Inference from Unposed Sparse View Images},
year = {2024},
month = {May},
school = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-24-15},
keywords = {Sparse-view 3D Reconstruction, Novel-view Synthesis, 3D Generation},
}