
Stacked Hierarchical Labeling

Conference Paper, Proceedings of the European Conference on Computer Vision (ECCV), pp. 57–70, September 2010

Abstract

In this work we propose a hierarchical approach for labeling semantic objects and regions in scenes. Our approach is reminiscent of the early vision literature in that we use a decomposition of the image to encode relational and spatial information. In contrast to much existing work on structured prediction for scene understanding, we bypass a global probabilistic model and instead directly train a hierarchical inference procedure inspired by the message-passing mechanics of approximate inference in graphical models. This approach mitigates both the theoretical and empirical difficulties of learning probabilistic models when exact inference is intractable. In particular, drawing from recent work in machine learning, we break the complex inference process into a hierarchical series of simple machine learning subproblems. Each subproblem in the hierarchy is designed to capture the image and contextual statistics of the scene. The hierarchy spans coarse-to-fine regions and explicitly models the mixtures of semantic labels that may be present due to imperfect segmentation. To avoid cascading errors and overfitting, we train the subproblems in sequence, ensuring robustness to the errors likely to be made earlier in the inference sequence, and leverage the stacking approach developed by Cohen et al.
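
To make the training loop the abstract describes concrete, the following is a minimal sketch of stacked hierarchical training: one simple classifier per level of a coarse-to-fine region hierarchy, where each level receives the previous level's predicted label distributions as context, and where those distributions are produced on held-out folds so later stages see realistic earlier-stage errors. The levels data layout, the train_stacked_hierarchy helper, and the use of scikit-learn logistic regression as the simple subproblem are all illustrative assumptions, not the paper's actual implementation.

    # A minimal sketch of stacked hierarchical training, assuming a
    # precomputed region hierarchy. This is illustrative, not the
    # paper's implementation.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_predict


    def train_stacked_hierarchy(levels, n_folds=5):
        """Train one classifier per hierarchy level, coarse to fine.

        levels is a list of dicts, one per level, each with:
          "X"      -- (n_regions, d) appearance/context features
          "y"      -- (n_regions,) ground-truth label per region
          "parent" -- (n_regions,) index of each region's parent
                      region at the coarser level (absent at the root)
        """
        classifiers = []
        parent_probs = None  # label mixture passed down the hierarchy
        for level in levels:
            X = level["X"]
            if parent_probs is not None:
                # Augment each region's features with its parent's
                # predicted label distribution (the contextual message
                # from the coarser level).
                X = np.hstack([X, parent_probs[level["parent"]]])
            clf = LogisticRegression(max_iter=1000)
            # Stacking: predictions fed to the *next* level come from
            # held-out folds, so later stages are trained against the
            # errors earlier stages actually make, not optimistic
            # in-sample predictions.
            parent_probs = cross_val_predict(
                clf, X, level["y"], cv=n_folds, method="predict_proba"
            )
            # The final classifier for this level is fit on all data.
            clf.fit(X, level["y"])
            classifiers.append(clf)
        return classifiers

At test time the levels would be evaluated in the same coarse-to-fine order, with each classifier's predicted label distribution appended to its child regions' features before the next classifier runs; predicting a full distribution per region, rather than a single label, is what lets the procedure represent label mixtures caused by imperfect segmentation.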

BibTeX

@conference{Munoz-2010-10521,
author = {Daniel Munoz and J. Andrew (Drew) Bagnell and Martial Hebert},
title = {Stacked Hierarchical Labeling},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2010},
month = {September},
pages = {57--70},
}