
Towards Segmenting Everything that Moves

Achal Dave, Pavel Tokmakov, and Deva Ramanan
Workshop Paper, ICCV '19 Workshop, pp. 1493 - 1502, October, 2019

Abstract

Video analysis is the task of perceiving the world as it changes. Often, though, most of the world doesn't change all that much: it's boring. For many applications such as action detection or robotic interaction, segmenting all moving objects is a crucial first step. While this problem has been well-studied in the field of spatiotemporal segmentation, virtually none of the prior works use learning-based approaches, despite significant advances in single-frame instance segmentation. We propose the first deep learning-based approach for video instance segmentation. Our two-stream model's architecture is based on Mask R-CNN, but additionally takes optical flow as input to identify moving objects. It then combines the motion and appearance cues to correct motion estimation mistakes and capture the full extent of objects. We achieve state-of-the-art results on the Freiburg-Berkeley Motion Segmentation dataset, outperforming prior work by a wide margin. One potential worry with learning-based methods is that they might overfit to the particular types of objects they have been trained on. While current recognition systems tend to be limited to a "closed world" of N objects on which they are trained, our model seems to segment almost anything that moves.
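To make the two-stream idea concrete, here is a minimal sketch of a network that fuses an appearance stream (RGB frame) with a motion stream (optical flow) before a per-pixel prediction head. This is an illustrative toy model, not the paper's Mask R-CNN-based implementation; the layer sizes, the 2-channel flow encoding, the fusion-by-concatenation, and the `TwoStreamSegmenter` name are all assumptions made for this sketch.

```python
# Toy sketch of a two-stream moving-object segmenter (NOT the authors' code).
# One stream encodes appearance (RGB), the other encodes motion (optical flow);
# their features are concatenated and passed to a small segmentation head.
import torch
import torch.nn as nn


class TwoStreamSegmenter(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        # Appearance stream: encodes a single 3-channel RGB frame.
        self.appearance = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Motion stream: encodes a 2-channel optical flow field (dx, dy).
        self.motion = nn.Sequential(
            nn.Conv2d(2, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Fusion + per-pixel head: predicts moving vs. not-moving logits.
        self.head = nn.Conv2d(256, num_classes, kernel_size=1)

    def forward(self, rgb, flow):
        fused = torch.cat([self.appearance(rgb), self.motion(flow)], dim=1)
        return self.head(fused)


if __name__ == "__main__":
    model = TwoStreamSegmenter()
    rgb = torch.randn(1, 3, 128, 128)    # one video frame
    flow = torch.randn(1, 2, 128, 128)   # optical flow to the next frame
    print(model(rgb, flow).shape)        # -> torch.Size([1, 2, 128, 128])
```

In the paper itself, the fusion happens within a Mask R-CNN-style detector so that motion cues propose moving regions while appearance cues recover the full extent of each object; the sketch above only illustrates the general two-input, fused-feature design.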

BibTeX

@workshop{Dave-2019-121128,
author = {Achal Dave and Pavel Tokmakov and Deva Ramanan},
title = {Towards Segmenting Everything that Moves},
booktitle = {Proceedings of ICCV '19 Workshop},
year = {2019},
month = {October},
pages = {1493--1502},
}