Patch to the Future: Unsupervised Visual Prediction

Jacob Walker, Abhinav Gupta and Martial Hebert
Conference Paper, Carnegie Mellon University, Proc. Computer Vision and Pattern Recognition, March 2014


Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author’s copyright. These works may not be reposted without the explicit permission of the copyright holder.


In this paper we present a conceptually simple but surprisingly powerful method for visual prediction which combines the effectiveness of mid-level visual elements with temporal modeling from a decision-theoretic framework. Our framework can be learned in a completely unsupervised manner from a large collection of videos. More importantly, because our approach builds the prediction framework on these mid-level elements, we can predict not only the possible motion in the scene but also visual appearances, i.e., how appearances will change over time. This yields a visual "hallucination" of probable events on top of the scene. We show that our method is able to accurately predict and visualize simple future events; we also show that our approach is comparable to supervised methods for event prediction.

BibTeX Reference
@inproceedings{walker2014patch,
  title     = {Patch to the Future: Unsupervised Visual Prediction},
  author    = {Jacob Walker and Abhinav Gupta and Martial Hebert},
  booktitle = {Proc. Computer Vision and Pattern Recognition},
  school    = {Robotics Institute, Carnegie Mellon University},
  month     = {March},
  year      = {2014},
  address   = {Pittsburgh, PA},
}