An Uncertain Future: Forecasting from Static Images using Variational Autoencoders

Jacob Walker, Carl Doersch, Abhinav Gupta and Martial Hebert
Conference Paper, October 2016


Abstract

In a given scene, humans can easily predict a set of immediate future events that might happen. However, pixel-level anticipation in computer vision is difficult because machine learning struggles with the ambiguity in predicting the future. In this paper, we focus on predicting the dense trajectory of pixels in a scene: what will move in the scene, where it will travel, and how it will deform over the course of one second. We propose a conditional variational autoencoder as a solution to this problem. In this framework, direct inference from the image shapes the distribution of possible trajectories, while latent variables encode information that is not available in the image. We show that our method predicts events in a variety of scenes and can produce multiple different predictions for an ambiguous future. We also find that our method learns a representation that is applicable to semantic vision tasks.
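The core mechanism in the abstract — a conditional VAE in which the image conditions the trajectory distribution while a latent variable supplies the information the image alone cannot determine — can be sketched briefly. The following is a minimal illustrative forward pass, not the authors' architecture: the layer sizes, the plain linear maps, and the NumPy implementation are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(x, w, b):
    return x @ w + b

# Toy dimensions (assumptions, not taken from the paper).
IMG_D, TRAJ_D, Z_D, H = 64, 32, 8, 16

# Randomly initialized weights stand in for trained parameters.
W_enc = rng.normal(0, 0.1, (IMG_D + TRAJ_D, H))
b_enc = np.zeros(H)
W_mu, b_mu = rng.normal(0, 0.1, (H, Z_D)), np.zeros(Z_D)
W_lv, b_lv = rng.normal(0, 0.1, (H, Z_D)), np.zeros(Z_D)
W_dec = rng.normal(0, 0.1, (IMG_D + Z_D, TRAJ_D))
b_dec = np.zeros(TRAJ_D)

def cvae_loss(image, traj):
    # Recognition network q(z | image, trajectory).
    h = np.tanh(linear(np.concatenate([image, traj]), W_enc, b_enc))
    mu = linear(h, W_mu, b_mu)
    logvar = linear(h, W_lv, b_lv)
    # Reparameterization trick: z = mu + sigma * eps.
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=Z_D)
    # Decoder p(traj | image, z): the image conditions the prediction,
    # while z carries the information the image alone cannot determine.
    traj_hat = linear(np.concatenate([image, z]), W_dec, b_dec)
    recon = np.mean((traj_hat - traj) ** 2)
    # Closed-form KL(q(z | image, traj) || N(0, I)).
    kl = -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar))
    return recon + kl

image = rng.normal(size=IMG_D)
traj = rng.normal(size=TRAJ_D)
print(cvae_loss(image, traj))
```

At test time, the recognition network is dropped: sampling several z ~ N(0, I) for the same image yields several different predicted futures, which is how the framework produces multiple plausible outcomes for an ambiguous scene.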

Notes
Associated Labs: Manipulation Lab, Computer Vision Lab

BibTeX Reference
@conference{Walker-2016-103535,
  author = {Jacob Walker and Carl Doersch and Abhinav Gupta and Martial Hebert},
  title = {An Uncertain Future: Forecasting from Static Images using Variational Autoencoders},
  year = {2016},
  month = {October},
}