Patch to the Future: Unsupervised Visual Prediction

Jacob Walker, Abhinav Gupta, and Martial Hebert
Proc. Computer Vision and Pattern Recognition, March, 2014.


Download
  • Adobe portable document format (pdf) (8MB)
Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Abstract
In this paper we present a conceptually simple but surprisingly powerful method for visual prediction which combines the effectiveness of mid-level visual elements with temporal modeling from a decision-theoretic framework. Our framework can be learned in a completely unsupervised manner from a large collection of videos. More importantly, because our approach builds the prediction framework on these mid-level elements, we can predict not only the possible motion in the scene but also visual appearances, that is, how appearances will change over time. This yields a visual "hallucination" of probable events on top of the scene. We show that our method is able to accurately predict and visualize simple future events; we also show that our approach is comparable to supervised methods for event prediction.
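
The sketch below is a hypothetical illustration, not the algorithm from the paper: it shows how a decision-theoretic temporal model can score and roll out patch trajectories by running value iteration over a toy grid of patch locations, where a per-location reward stands in for scores derived from mid-level element detections. The grid size, the random REWARD array, and helper names such as value_iteration and predict_path are illustrative assumptions; in the actual system the reward is learned from videos in an unsupervised fashion, whereas here it is a random placeholder so the example runs standalone.

# Hypothetical sketch of decision-theoretic patch prediction (illustration only).
import numpy as np

H, W = 8, 8                      # toy spatial grid of patch locations
REWARD = np.random.rand(H, W)    # placeholder for a learned, scene-specific reward
GAMMA = 0.9                      # discount factor for future reward
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]  # up, down, left, right, stay

def value_iteration(reward, gamma=GAMMA, iters=100):
    """Compute the value of each patch location under greedy future movement."""
    value = np.zeros_like(reward)
    for _ in range(iters):
        new_value = np.full_like(value, -np.inf)
        for dy, dx in MOVES:
            shifted = np.full_like(value, -np.inf)
            # shifted[y, x] holds the value of the cell reached by moving (dy, dx),
            # restricted to moves that stay inside the grid
            ys = slice(max(dy, 0), H + min(dy, 0))
            xs = slice(max(dx, 0), W + min(dx, 0))
            src_ys = slice(max(-dy, 0), H + min(-dy, 0))
            src_xs = slice(max(-dx, 0), W + min(-dx, 0))
            shifted[src_ys, src_xs] = value[ys, xs]
            new_value = np.maximum(new_value, shifted)
        value = reward + gamma * new_value
    return value

def predict_path(value, start, steps=10):
    """Greedily roll out a likely patch trajectory from a start cell."""
    y, x = start
    path = [(y, x)]
    for _ in range(steps):
        best = max(
            ((y + dy, x + dx) for dy, dx in MOVES
             if 0 <= y + dy < H and 0 <= x + dx < W),
            key=lambda p: value[p],
        )
        y, x = best
        path.append(best)
    return path

values = value_iteration(REWARD)
print(predict_path(values, start=(0, 0)))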

Notes

Text Reference
Jacob Walker, Abhinav Gupta, and Martial Hebert, "Patch to the Future: Unsupervised Visual Prediction," Proc. Computer Vision and Pattern Recognition, March, 2014.

BibTeX Reference
@inproceedings{Walker_2014_7575,
   author = "Jacob Walker and Abhinav Gupta and Martial Hebert",
   title = "Patch to the Future: Unsupervised Visual Prediction",
   booktitle = "Proc. Computer Vision and Pattern Recognition",
   month = "March",
   year = "2014",
}