Recovering Occlusion Boundaries from a Single Image - Robotics Institute Carnegie Mellon University


Derek Hoiem, Andrew Stein, Alexei A. Efros, and Martial Hebert
Conference Paper, Proceedings of the International Conference on Computer Vision (ICCV), October 2007

Abstract

Occlusion reasoning, necessary for tasks such as navigation and object search, is an important aspect of everyday life and a fundamental problem in computer vision. We believe that the amazing ability of humans to reason about occlusions from one image is based on an intrinsically 3D interpretation. In this paper, our goal is to recover the occlusion boundaries and depth ordering of free-standing structures in the scene. Our approach is to learn to identify and label occlusion boundaries using the traditional edge and region cues together with 3D surface and depth cues. Since some of these cues require good spatial support (i.e., a segmentation), we gradually create larger regions and use them to improve inference over the boundaries. Our experiments demonstrate the power of a scene-based approach to occlusion reasoning.
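The iterative grouping idea in the abstract — score each boundary between adjacent regions using learned cues, dissolve weak boundaries so regions grow and provide better spatial support, then re-score — can be sketched in a few lines. This is a minimal toy illustration, not the authors' implementation: the `depth_cue` feature, the stand-in scoring function, and the threshold schedule are all assumptions made for the example; the paper's actual classifier combines edge, region, 3D surface, and depth cues.

```python
def boundary_occlusion_score(region_a, region_b):
    """Stand-in for the learned boundary classifier (hypothetical cue):
    here, just the absolute difference of the regions' mean depth cue."""
    return abs(region_a["depth_cue"] - region_b["depth_cue"])

def iterative_boundary_inference(regions, adjacency, thresholds):
    """regions: id -> {'depth_cue': float}
    adjacency: set of frozenset({id_a, id_b}) pairs (candidate boundaries)
    thresholds: increasing schedule, mirroring the gradual region growth."""
    for t in thresholds:                    # gradually create larger regions
        merged = True
        while merged:
            merged = False
            for pair in list(adjacency):
                a, b = tuple(pair)
                if boundary_occlusion_score(regions[a], regions[b]) < t:
                    # weak boundary: merge b into a for larger spatial support
                    regions[a]["depth_cue"] = (regions[a]["depth_cue"]
                                               + regions[b]["depth_cue"]) / 2
                    del regions[b]
                    # redirect b's neighbours to a; drop degenerate pairs
                    adjacency = {frozenset({a if r == b else r for r in p})
                                 for p in adjacency}
                    adjacency = {p for p in adjacency if len(p) == 2}
                    merged = True
                    break
    # boundaries that survive all rounds are the predicted occlusion boundaries
    return regions, adjacency

# Toy scene: four regions in a chain; two near-coplanar pairs should merge,
# leaving the single strong depth discontinuity as the occlusion boundary.
regions = {1: {"depth_cue": 0.10}, 2: {"depth_cue": 0.12},
           3: {"depth_cue": 0.90}, 4: {"depth_cue": 0.95}}
adjacency = {frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4})}
regions, boundaries = iterative_boundary_inference(regions, adjacency,
                                                   thresholds=[0.05, 0.1])
print(boundaries)   # -> {frozenset({1, 3})}
```

The threshold schedule plays the role of the paper's gradual region creation: early rounds only remove clearly weak boundaries, and later rounds re-decide the remaining ones once the merged regions supply more reliable statistics.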

BibTeX

@conference{Hoiem-2007-9826,
author = {Derek Hoiem and Andrew Stein and Alexei A. Efros and Martial Hebert},
title = {Recovering Occlusion Boundaries from a Single Image},
booktitle = {Proceedings of the International Conference on Computer Vision (ICCV)},
year = {2007},
month = {October},
}