WALT: Watch And Learn 2D Amodal Representation using Time-lapse Imagery

Conference Paper, Proceedings of (CVPR) Computer Vision and Pattern Recognition, June 2022

Abstract

Current methods for object detection, segmentation, and tracking fail in the presence of severe occlusions in busy urban environments. Labeled real data of occlusions is scarce (even in large datasets) and synthetic data leaves a domain gap, making it hard to explicitly model and learn occlusions. In this work, we present the best of both the real and synthetic worlds for automatic occlusion supervision using a large, readily available source of data: time-lapse imagery from stationary webcams observing street intersections over weeks, months, or even years. We introduce a new dataset, Watch and Learn Time-lapse (WALT), consisting of 12 (4K and 1080p) cameras capturing urban environments over a year. We exploit this real data in a novel way to first automatically mine a large set of unoccluded objects and then composite them in the same views to generate occlusion scenarios. This self-supervision is strong enough for an amodal network to learn the object-occluder-occluded layer representations. We show how to speed up discovery of unoccluded objects and relate the confidence in this discovery to the rate and accuracy of training of occluded objects. After watching and automatically learning for several days, this approach shows significant performance improvement in detecting and segmenting occluded people and vehicles, over human-supervised amodal approaches.
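The key pipeline step described in the abstract, compositing mined unoccluded objects back into the same camera view to synthesize occlusions, can be illustrated with a minimal sketch. The sketch below assumes object crops have already been mined as (RGB crop, binary mask, paste position) tuples; the function name, dictionary keys, and data layout are illustrative assumptions, not the paper's code. Pasting later objects over earlier ones yields, for every object, a full (amodal) mask and a visible (modal) mask that can serve as supervision for an amodal network.

import numpy as np

def composite_occlusions(background, objects):
    """Composite unoccluded object crops onto a view from the same camera.

    background: HxWx3 uint8 image from the stationary camera.
    objects: list of dicts with keys 'rgb' (hxwx3 uint8 crop),
             'mask' (hxw bool), and 'top_left' ((y, x) paste position).
             Objects are pasted in order, so later entries occlude earlier ones.
    Returns the composite image plus per-object amodal and visible masks.
    """
    H, W = background.shape[:2]
    composite = background.copy()
    amodal_masks, visible_masks = [], []

    for obj in objects:
        y, x = obj["top_left"]
        h, w = obj["mask"].shape
        # Assumes the crop fits entirely inside the frame.
        amodal = np.zeros((H, W), dtype=bool)
        amodal[y:y + h, x:x + w] = obj["mask"]

        # Paste the object; it occludes everything composited before it.
        composite[amodal] = obj["rgb"][obj["mask"]]

        # Later pastes carve pixels out of earlier objects' visible masks.
        for prev in visible_masks:
            prev &= ~amodal

        amodal_masks.append(amodal)
        visible_masks.append(amodal.copy())

    return composite, amodal_masks, visible_masks

The sketch omits the discovery step and the confidence weighting mentioned in the abstract, where the confidence of unoccluded-object discovery is related to the rate and accuracy of training on occluded objects.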

BibTeX

@conference{Narapureddy-2022-133995,
author = {Dinesh Reddy Narapureddy and Robert Tamburo and Srinivasa G. Narasimhan},
title = {WALT: Watch And Learn 2D Amodal Representation using Time-lapse Imagery},
booktitle = {Proceedings of (CVPR) Computer Vision and Pattern Recognition},
year = {2022},
month = {June},
keywords = {Amodal; Segmentation},
}