Learning and Reasoning with Visual Correspondence in Time

PhD Thesis, Tech. Report CMU-RI-TR-19-75, Robotics Institute, Carnegie Mellon University, September 2019

Abstract

There is a well-known tale in computer vision: a graduate student once asked the renowned computer vision scientist Takeo Kanade, "What are the three most important problems in computer vision?" Takeo replied: "Correspondence, correspondence, correspondence!" Indeed, even the most commonly applied Convolutional Neural Networks (ConvNets) internally learn representations that give rise to correspondence across objects or object parts. These networks learn from human annotations on millions of static images; for example, humans label images as dog, car, etc. However, this is not how we humans learn. The visual system of an infant develops in a dynamic and continuous environment, without semantic labels until much later in life.

In this thesis, I argue that we need to go beyond static images and exploit the massive amount of correspondence in videos, where millions of pixels are linked to one another through time. I will discuss how to learn correspondence from continuous observations in videos without any human supervision. Once correspondence is obtained, it can serve as supervision for training ConvNets, eliminating the need for manual labels. Beyond supervision, capturing long-range correspondence is also key to video understanding and interaction reasoning. The effectiveness of these ideas is demonstrated on tasks including object recognition, tracking, action recognition, affordance estimation, and physical property estimation.
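To make the self-supervision idea concrete, below is a minimal PyTorch sketch of one way correspondence can be learned from raw video without labels: compute soft correspondences forward in time and then backward, and penalize round trips that fail to return to their starting position (a cycle-consistency objective in the spirit of the thesis). This is an illustrative assumption rather than the thesis's exact formulation; the feature shapes, temperature, and the random features standing in for an encoder are placeholders.

import torch
import torch.nn.functional as F

def affinity(feat_a, feat_b, temperature=0.07):
    # Soft correspondence between all spatial positions of two frames.
    # feat_a, feat_b: (C, N) L2-normalized features, N = H*W positions.
    # Returns a row-stochastic (N, N) matrix: row i gives how position i
    # in frame A distributes its correspondence mass over frame B.
    sim = feat_a.t() @ feat_b
    return F.softmax(sim / temperature, dim=1)

def cycle_consistency_loss(feat_t, feat_tk):
    # Propagate positions from frame t to frame t+k and back again;
    # a perfect round trip is the identity mapping.
    a_fwd = affinity(feat_t, feat_tk)      # t   -> t+k
    a_bwd = affinity(feat_tk, feat_t)      # t+k -> t
    cycle = a_fwd @ a_bwd                  # rows are probability distributions
    target = torch.arange(cycle.size(0))   # each position should map to itself
    return F.nll_loss(torch.log(cycle + 1e-8), target)

# Toy usage with random features standing in for a ConvNet encoder's output:
C, N = 64, 8 * 8
f_t  = F.normalize(torch.randn(C, N, requires_grad=True), dim=0)
f_tk = F.normalize(torch.randn(C, N, requires_grad=True), dim=0)
loss = cycle_consistency_loss(f_t, f_tk)
loss.backward()  # gradients would flow into the encoder that produced the features

Minimizing such a loss over many frame pairs is one way to obtain features whose cross-frame nearest neighbors act as correspondence, which can then propagate labels or serve as supervision for downstream tasks.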

BibTeX

@phdthesis{Wang-2019-117586,
author = {Xiaolong Wang},
title = {Learning and Reasoning with Visual Correspondence in Time},
year = {2019},
month = {September},
school = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-19-75},
keywords = {Video; Correspondence; Self-Supervised Learning; Interaction Reasoning},
}