We have developed a fully automatic algorithm for interpolating views of a completely non-rigid dynamic event across both space and time. The algorithm operates by combining images captured across space to compute voxel models of the scene shape at each time instant, and images captured across time to compute the ``scene flow'' between the voxel models. The
scene flow is the non-rigid 3D motion of every point in the scene. To interpolate in time, the voxel models are ``flowed'' using an appropriate multiple of the scene flow, and a smooth surface is fit to the result. The novel image is then computed by ray-casting to the surface at the intermediate time instant, following the scene flow to the neighboring time instants, projecting into the input images at those times, and finally blending the results. We use our algorithm to create re-timed slow-motion fly-by movies of dynamic real-world events. An example of such a movie is included below. On the left is the closest input image, and on the right the spatio-temporally interpolated image.
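The temporal part of the pipeline can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes voxel centers and per-voxel scene-flow vectors are given as NumPy arrays, flows the voxels by a fraction t of the scene flow, and cross-fades colors sampled at the two neighboring time instants. The function names (`flow_voxels`, `blend_colors`) are hypothetical.

```python
import numpy as np

def flow_voxels(points, scene_flow, t):
    """Move each 3D point along its scene-flow vector by fraction t in [0, 1].
    points: (N, 3) voxel centers at time i; scene_flow: (N, 3) motion to time i+1."""
    return points + t * scene_flow

def blend_colors(color_prev, color_next, t):
    """Cross-fade colors sampled from the two neighboring input time instants."""
    return (1.0 - t) * color_prev + t * color_next

# Two voxels moving between consecutive frames (illustrative data only).
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
flow = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 2.0]])
mid = flow_voxels(pts, flow, 0.5)  # voxel positions at the halfway instant
```

In the full algorithm the flowed voxels are first turned into a smooth surface and the blend weights come from projecting surface points into the input images; this sketch only shows the flow-and-blend arithmetic.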
The algorithm can of course be used to interpolate in time alone, as the following slow-motion re-timed video shows. Again, on the left we show the closest input image (in time) and on the right the temporally interpolated image.
Our algorithm contains a number of technical innovations to improve rendering quality. One example is the use of ``duplicate voxels'' to ensure smooth continuous motion across multiple time steps. See the movie below for an illustration of our algorithm with and without duplicate voxels.
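One plausible way to picture duplicate voxels (a simplified illustration, not the paper's exact construction): when a single source voxel corresponds to several destination voxels in the next frame, a single flow vector cannot describe its motion continuously, so the voxel is duplicated and each copy carries its own flow. The helper name `add_duplicate_voxels` and the correspondence format are assumptions.

```python
def add_duplicate_voxels(src_points, correspondences):
    """Duplicate source voxels so each correspondence gets one flow vector.

    src_points: list of 3D voxel centers at time i.
    correspondences: list of (src_index, dst_point) pairs; a source index may
    appear several times, in which case the voxel is duplicated per pair.
    Returns (duplicated_points, flow_vectors), one entry per correspondence.
    """
    dup_points, flows = [], []
    for i, dst in correspondences:
        p = src_points[i]
        dup_points.append(list(p))
        flows.append([d - c for c, d in zip(p, dst)])
    return dup_points, flows

# One voxel matched to two destinations -> two copies, each with its own flow.
points, flows = add_duplicate_voxels(
    [[0.0, 0.0, 0.0]],
    [(0, (1.0, 0.0, 0.0)), (0, (0.0, 1.0, 0.0))],
)
```

Each copy now moves continuously along a single, well-defined flow vector across the time step, which is the property the movie above demonstrates.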
Please see here for more examples.
The Robotics Institute is part of the School of Computer Science, Carnegie Mellon University.