Image-Based Spatio-Temporal Modeling and View Interpolation of Dynamic Events

Sundar Vedula, Simon Baker, and Takeo Kanade
Journal Article, ACM Transactions on Graphics (TOG), Vol. 24, No. 2, pp. 240 - 261, April, 2005

Abstract

We present an approach for modeling and rendering a dynamic, real-world event from an arbitrary viewpoint, and at any time, using images captured from multiple video cameras. The event is modeled as a non-rigidly varying dynamic scene, captured by many images from different viewpoints, at discrete times. First, the spatio-temporal geometric properties (shape and instantaneous motion) are computed. The view synthesis problem is then solved using a reverse mapping algorithm, ray-casting across space and time, to compute a novel image from any viewpoint in the 4D space of position and time. Results are shown on real-world events captured in the CMU 3D Room, by creating synthetic renderings of the event from novel, arbitrary positions in space and time. Multiple such re-created renderings can be combined into re-timed fly-by movies of the event, yielding a visual experience richer than either a regular video clip or switching between images from multiple cameras.
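The core temporal idea can be illustrated with a minimal sketch: given a 3D shape estimate at one captured time instant and a per-point scene flow (instantaneous 3D motion), point positions at an intermediate query time can be linearly interpolated. The function below is a hypothetical simplification for illustration; the paper's full method also interpolates shape and appearance and performs ray-casting in the 4D space of viewpoint and time.

```python
import numpy as np

def interpolate_points(points_t0, scene_flow, t0, t1, t):
    """Linearly interpolate 3D scene points between two captured
    instants t0 and t1 using per-point scene flow, where scene_flow
    is the displacement of each point over the interval [t0, t1].
    (Hypothetical helper, not the authors' actual implementation.)"""
    alpha = (t - t0) / (t1 - t0)            # normalized time in [0, 1]
    return points_t0 + alpha * scene_flow   # flowed positions at time t

# Two scene points with known flow over one frame interval
pts = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]])
flow = np.array([[0.2, 0.0, 0.0], [0.0, -0.4, 0.0]])
mid = interpolate_points(pts, flow, t0=0.0, t1=1.0, t=0.5)
# mid -> [[0.1, 0.0, 0.0], [1.0, 1.8, 3.0]]
```

A novel image at time t would then be rendered by casting rays from the virtual camera and intersecting them with the interpolated geometry, fetching colors from the nearest captured views.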

BibTeX

@article{Vedula-2005-9141,
author = {Sundar Vedula and Simon Baker and Takeo Kanade},
title = {Image-Based Spatio-Temporal Modeling and View Interpolation of Dynamic Events},
journal = {ACM Transactions on Graphics (TOG)},
year = {2005},
month = {April},
volume = {24},
number = {2},
pages = {240--261},
keywords = {Image-based modeling and rendering, dynamic scenes, spatio-temporal view interpolation, non-rigid motion, voxel models, space carving, scene flow.},
}