Structure Discovery in Multi-modal Data: a Region-based Approach

Conference Paper, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 5695–5702, May 2011

Abstract

The ability of a perception system to discern what is important in a scene and what is not is an invaluable asset, with applications in object recognition, people detection, and SLAM, among others. In this paper, we aim to analyze all available sensory data to separate a scene into a few physically meaningful parts, which we term structure, while discarding background clutter. In particular, we consider the combination of image and range data, and base our decision on both appearance and 3D shape. Our main contribution is the development of a framework to perform scene segmentation that preserves physical objects using multi-modal data. We combine image and range data using a novel mid-level fusion technique based on the concept of regions that avoids any pixel-level correspondences between data sources. We associate groups of pixels with 3D points into multi-modal regions that we term regionlets, and measure the structure-ness of each regionlet using simple, bottom-up cues from image and range features. We show that the highest-ranked regionlets correspond to the most prominent objects in the scene. We verify the validity of our approach on 105 scenes of household environments.
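To make the regionlet idea concrete, the following is a minimal sketch of how such a multi-modal region might be represented and ranked. It is not the authors' implementation: the Regionlet class, the particular appearance and planarity cues, and the combination weight w_app are all hypothetical stand-ins for the paper's bottom-up image and range cues.

```python
# Illustrative sketch of a "regionlet": a multi-modal region grouping image
# pixels with associated 3D range points, ranked by a bottom-up
# structure-ness score. All names, cues, and weights are assumptions for
# illustration; this is NOT the authors' implementation.
from dataclasses import dataclass
import numpy as np

@dataclass
class Regionlet:
    pixel_indices: np.ndarray   # (N, 2) image coordinates of the region's pixels
    points_3d: np.ndarray       # (M, 3) range points associated with the region

    def appearance_cue(self, image: np.ndarray) -> float:
        # Hypothetical appearance cue: low color variance suggests a
        # coherent surface rather than background clutter.
        rows, cols = self.pixel_indices[:, 0], self.pixel_indices[:, 1]
        colors = image[rows, cols].astype(float)
        return 1.0 / (1.0 + colors.var())

    def shape_cue(self) -> float:
        # Hypothetical 3D shape cue: planarity measured by the ratio of the
        # smallest to the largest eigenvalue of the point covariance
        # (flat, compact regions score close to 1).
        if len(self.points_3d) < 3:
            return 0.0
        cov = np.cov(self.points_3d.T)
        eigvals = np.sort(np.linalg.eigvalsh(cov))
        return 1.0 / (1.0 + eigvals[0] / (eigvals[2] + 1e-9))

    def structureness(self, image: np.ndarray, w_app: float = 0.5) -> float:
        # Combine appearance and shape cues; the linear combination and the
        # weight w_app are assumptions, not the paper's formulation.
        return w_app * self.appearance_cue(image) + (1 - w_app) * self.shape_cue()

def rank_regionlets(regionlets, image):
    """Sort regionlets by descending structure-ness, so the highest-ranked
    ones correspond to the most prominent objects in the scene."""
    return sorted(regionlets, key=lambda r: r.structureness(image), reverse=True)
```

Note how this mirrors the mid-level fusion described in the abstract: each regionlet carries its own pixel set and point set, so scoring requires only a loose region-level association between the two modalities rather than per-pixel depth correspondences.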

BibTeX

@conference{Romea-2011-7249,
author = {Alvaro Collet Romea and Siddhartha Srinivasa and Martial Hebert},
title = {Structure Discovery in Multi-modal Data: a Region-based Approach},
booktitle = {Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)},
year = {2011},
month = {May},
pages = {5695--5702},
}