
Abstract:
Inverse rendering is the process of recovering the shape, materials, and lighting conditions of an environment from a set of images. Both this process as a whole and its individual components are fundamental to applications ranging from medical imaging to astronomy, and from AR/VR to embodied intelligence. In the thesis work discussed in this talk, we demonstrate that the combination of physics-based light transport and active sensing is fruitful for developing inverse rendering algorithms that are robust to complex visual phenomena encountered in real-world environments. We first derive a differentiable volume rendering procedure for the raw measurements from continuous-wave time-of-flight cameras, which produces higher-quality 3D reconstructions than baselines when imaging low-albedo objects, scenes with large depth ranges, and surfaces that exhibit multi-path interference. Next, we extend this framework to perform physics-based inverse rendering of geometry, materials, and lighting under strong global illumination for both active and passive sensors. With it, we showcase novel applications such as accurate reconstruction in the presence of near-field reflections, time-resolved relighting, and time-of-flight imaging *without* time-of-flight cameras. Finally, we present an efficient neural scene representation that can model a broad class of “non-epipolar” light transport effects, including refractions, scene dynamics, and more.
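
To give a flavor of the first contribution, the sketch below shows how differentiable volume rendering can be adapted to the raw correlation measurements of a continuous-wave time-of-flight camera. This is a minimal illustration in JAX, not the thesis implementation: the field functions (`sigma_fn`, `albedo_fn`), the 30 MHz modulation frequency, the four quadrature phase offsets, and the inverse-square falloff for a co-located light source are all illustrative assumptions.

```python
# Minimal sketch: volume rendering of raw CW-ToF correlation samples along one ray.
import jax.numpy as jnp

C = 3e8                      # speed of light (m/s)
OMEGA = 2 * jnp.pi * 30e6    # assumed modulation frequency (30 MHz)
PHASE_OFFSETS = jnp.array([0.0, jnp.pi / 2, jnp.pi, 3 * jnp.pi / 2])  # assumed quadratures

def render_ctof_ray(sigma_fn, albedo_fn, origin, direction,
                    t_near=0.5, t_far=10.0, n_samples=128):
    """Render the raw CW-ToF correlation values for a single camera ray."""
    # Sample distances along the ray and query the (hypothetical) scene fields.
    t = jnp.linspace(t_near, t_far, n_samples)
    pts = origin[None, :] + t[:, None] * direction[None, :]
    sigma = sigma_fn(pts)      # volume density at each sample, shape (n_samples,)
    albedo = albedo_fn(pts)    # reflectance at each sample, shape (n_samples,)

    # Standard alpha-compositing weights from density and transmittance.
    delta = (t_far - t_near) / (n_samples - 1)
    alpha = 1.0 - jnp.exp(-sigma * delta)
    trans = jnp.exp(-jnp.cumsum(
        jnp.concatenate([jnp.zeros(1), sigma[:-1] * delta])))
    weights = trans * alpha

    # Each sample's round-trip time sets the phase of its contribution; the
    # amplitude falls off with inverse-square distance for an active source.
    tau = 2.0 * t / C
    falloff = 1.0 / jnp.maximum(t ** 2, 1e-6)
    phasor = (albedo * falloff)[:, None] * jnp.cos(
        OMEGA * tau[:, None] + PHASE_OFFSETS[None, :])

    # Composite along the ray: one raw correlation value per phase offset.
    return jnp.sum(weights[:, None] * phasor, axis=0)
```

In practice, `sigma_fn` and `albedo_fn` would be neural fields whose parameters are optimized by differentiating a loss between these rendered correlation values and the camera's captured raw measurements (e.g., with `jax.grad`), rather than fitting to precomputed depth maps.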
Thesis Committee Members:
Matthew O’Toole (Chair)
Aswin Sankaranarayanan
Shubham Tulsiani
Noah Snavely (Cornell)