Co-inference for Multi-modal Scene Analysis

Daniel Munoz, J. Andrew (Drew) Bagnell and Martial Hebert
Conference Paper, European Conference on Computer Vision (ECCV), October 2012

Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author’s copyright. These works may not be reposted without the explicit permission of the copyright holder.


We address the problem of understanding scenes from multiple sources of sensor data (e.g., a camera and a laser scanner) in the case where there is no one-to-one correspondence across modalities (e.g., pixels and 3-D points). This is an important scenario that frequently arises in practice not only when two different types of sensors are used, but also when the sensors are not co-located and have different sampling rates. Previous work has addressed this problem by restricting interpretation to a single representation in one of the domains, with augmented features that attempt to encode the information from the other modalities. Instead, we propose to analyze all modalities simultaneously while propagating information across domains during the inference procedure. In addition to the immediate benefit of generating a complete interpretation in all of the modalities, we demonstrate that this co-inference approach also improves performance over the canonical approach.
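The paper's actual inference procedure is not reproduced here. As a rough, hedged illustration of the core idea, the toy sketch below iteratively propagates class scores between two modalities (e.g., image pixels and 3-D points) through explicit cross-modal links that need not be one-to-one; the function name, averaging update, and blending weight are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def co_inference(scores_a, scores_b, links, alpha=0.5, iters=5):
    """Toy cross-modal inference sketch (not the paper's method).

    scores_a: (Na, K) per-node class scores for modality A (e.g., pixels)
    scores_b: (Nb, K) per-node class scores for modality B (e.g., 3-D points)
    links:    list of (i, j) cross-modal correspondences; a node may
              participate in zero, one, or many links (no one-to-one
              correspondence is assumed)
    alpha:    blending weight for information from the other modality
    """
    a = np.asarray(scores_a, dtype=float).copy()
    b = np.asarray(scores_b, dtype=float).copy()
    for _ in range(iters):
        # Accumulate messages from the other modality using the old scores.
        msg_a, cnt_a = np.zeros_like(a), np.zeros(len(a))
        msg_b, cnt_b = np.zeros_like(b), np.zeros(len(b))
        for i, j in links:
            msg_a[i] += b[j]; cnt_a[i] += 1
            msg_b[j] += a[i]; cnt_b[j] += 1
        # Blend each linked node's scores with the average incoming message;
        # nodes with no cross-modal link keep their own scores unchanged.
        ha, hb = cnt_a > 0, cnt_b > 0
        a[ha] = (1 - alpha) * a[ha] + alpha * msg_a[ha] / cnt_a[ha][:, None]
        b[hb] = (1 - alpha) * b[hb] + alpha * msg_b[hb] / cnt_b[hb][:, None]
    return a, b
```

For example, a confident pixel (scores `[0.9, 0.1]`) linked to an ambiguous 3-D point (scores `[0.5, 0.5]`) pulls the point toward the same label after a few rounds, while the point's ambiguity softens the pixel's confidence, which is the kind of bidirectional information flow the abstract describes.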

BibTeX Reference
@inproceedings{munoz2012coinference,
  author    = {Daniel Munoz and J. Andrew (Drew) Bagnell and Martial Hebert},
  title     = {Co-inference for Multi-modal Scene Analysis},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year      = {2012},
  month     = {October},
}