
PhD Thesis Proposal

Benjamin Attal
PhD Student, Robotics Institute, Carnegie Mellon University
Wednesday, February 7
10:00 am to 11:30 am
NSH 3305
Combining Physics-Based Light Transport and Neural Fields for Robust Inverse Rendering

Abstract:
Inverse rendering — the process of recovering the shape, material, and/or lighting of an object or environment from a set of images — is essential for applications in robotics and elsewhere, from AR/VR to perception on self-driving vehicles. While it is possible to perform inverse rendering from color images alone, it is often far easier with the help of active sensors like time-of-flight and structured-light depth cameras. Many recent methods that make use of such sensors also take advantage of differentiable rendering, a powerful tool that reduces a variety of inverse problems to straightforward gradient-based optimization. What is more, physics-based extensions of differentiable rendering can account for effects (e.g., volumetric interactions, reflections/refractions, and global illumination) that often break traditional algorithms, albeit at an increased computational cost. In this thesis, we show how to combine physics-based light transport and neural fields in order to build inverse rendering algorithms that are both efficient and robust to challenging effects that appear in real-world settings. In the first work of this thesis, we derive a differentiable volume rendering procedure for the raw measurements from continuous-wave time-of-flight cameras, which enables better-quality 3D reconstruction under high sensor noise, phase wrapping, and multi-path interference. In the next two works, we show how sample networks, neural fields that define perturbations to volumetric sample points, can both accelerate volume rendering and extend its modeling power to support highly reflective/refractive objects. Third, we propose to combine the advantages of physics-based light transport and sample networks in order to design a method that performs efficient inverse rendering of geometry and materials under strong near-field global illumination.
Finally, we propose a general plug-and-play method that leverages sample networks in order to account for errors in inverse rendering arising from a broad range of non-idealities, including some of those discussed above, such as reflections/refractions, as well as others like imperfect camera calibration and object dynamics.
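For readers unfamiliar with these ingredients, the toy sketch below illustrates two of them: the standard volume-rendering quadrature used in differentiable volume rendering (compositing weights w_i = T_i * (1 - exp(-sigma_i * delta_i))), and a "sample network" in the sense described above, i.e., a small network that perturbs the naive sample points o + t*d along a ray. The MLP here is untrained and randomly initialized, and all sizes and names are hypothetical illustrations, not the thesis's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Standard volume-rendering quadrature ---
# Given per-sample densities sigma_i and segment lengths delta_i,
# alpha_i = 1 - exp(-sigma_i * delta_i) is the segment opacity,
# T_i = prod_{j<i} (1 - alpha_j) the transmittance up to sample i,
# and the compositing weights are w_i = T_i * alpha_i.
def render_weights(sigmas, deltas):
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]
    return trans * alphas

# --- A toy "sample network" ---
# A tiny randomly initialized MLP mapping (ray origin, direction, t)
# to a 3D offset that perturbs the naive sample point o + t*d
# (e.g., so samples can land on the virtual image of a reflected or
# refracted surface). In practice such perturbations are learned;
# here the network is purely illustrative.
W1 = rng.normal(scale=0.1, size=(7, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 3)); b2 = np.zeros(3)

def perturbed_samples(o, d, ts):
    pts = []
    for t in ts:
        h = np.tanh(np.concatenate([o, d, [t]]) @ W1 + b1)
        pts.append(o + t * d + h @ W2 + b2)
    return np.stack(pts)

# Demo: a ray through a medium whose density peaks at the second sample,
# so the compositing weight should concentrate there.
sigmas = np.array([0.1, 5.0, 0.1, 0.1])
deltas = np.full(4, 0.25)
w = render_weights(sigmas, deltas)

o = np.zeros(3)
d = np.array([0.0, 0.0, 1.0])
pts = perturbed_samples(o, d, ts=np.array([0.25, 0.5, 0.75, 1.0]))
```

Because both steps are built from differentiable operations, an autodiff framework can backpropagate an image-space loss through the weights and the sample perturbations alike, which is what makes gradient-based inverse rendering of this kind possible.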

Thesis Committee Members:
Matthew O’Toole, Chair
Aswin Sankaranarayanan
Shubham Tulsiani
Noah Snavely, Cornell
