
From 3D Scene Geometry to Human Workspace

Abhinav Gupta, Scott Satkin, Alexei A. Efros and Martial Hebert
Conference Paper, Carnegie Mellon University, IEEE Conference on Computer Vision and Pattern Recognition, pp. 1961-1968, May 2011


Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author’s copyright. These works may not be reposted without the explicit permission of the copyright holder.

Abstract

We present a human-centric paradigm for scene understanding. Our approach goes beyond estimating 3D scene geometry and predicts the “workspace” of a human, represented by a data-driven vocabulary of human interactions. Our method builds upon recent work in indoor scene understanding and the availability of motion capture data to create a joint space of human poses and scene geometry by modeling the physical interactions between the two. This joint space can then be used to predict potential human poses and joint locations from a single image. In a way, this work revisits the principle of Gibsonian affordances, reinterpreting it for the modern, data-driven era.
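
To make the idea of "modeling the physical interactions" between poses and geometry concrete, the sketch below scores a candidate pose placement against a voxelized scene using two simple physical tests: the body must not intersect solid geometry (free space), and its contact points must rest on solid geometry (support). This is an illustrative sketch only, not the authors' implementation; the function `score_pose_placement`, the voxel representation, and the weights are all hypothetical assumptions made for the example.

```python
# Hypothetical sketch of affordance-style pose scoring; NOT the paper's method.
# Assumptions: the scene is a binary occupancy voxel grid (True = solid), and each
# mocap pose is reduced to two voxel masks in its local frame: the cells the body
# occupies and the contact cells that must be supported from below (e.g. under the
# hips for "sitting", under the feet for "standing").

import numpy as np

def score_pose_placement(occupancy, body_cells, support_cells, offset):
    """Return a simple compatibility score for one pose at one placement.

    occupancy     : (X, Y, Z) bool array, True where the scene is solid.
    body_cells    : (N, 3) int array of voxels the body occupies (local frame).
    support_cells : (M, 3) int array of contact voxels needing support (local frame).
    offset        : (3,) int translation placing the pose in the scene grid.
    """
    body = body_cells + offset
    below = support_cells + offset - np.array([0, 0, 1])  # cell under each contact

    # Reject placements that leave the voxel grid.
    all_cells = np.vstack([body, below])
    if (all_cells < 0).any() or (all_cells >= occupancy.shape).any():
        return -np.inf

    # Penalty: body cells intersecting solid scene geometry (free-space test).
    collisions = occupancy[body[:, 0], body[:, 1], body[:, 2]].sum()

    # Reward: contact cells resting on solid scene one voxel below (support test).
    supported = occupancy[below[:, 0], below[:, 1], below[:, 2]].sum()

    return supported - 10.0 * collisions  # weights are arbitrary, for illustration


if __name__ == "__main__":
    # Toy scene: a flat floor at z = 0 with a box-shaped "seat".
    scene = np.zeros((20, 20, 10), dtype=bool)
    scene[:, :, 0] = True            # floor
    scene[8:12, 8:12, 1:4] = True    # seat

    # Toy "sitting" pose: torso above the seat, support required under the hips.
    torso = np.array([[0, 0, 2], [0, 0, 3]])
    hips = np.array([[0, 0, 1]])

    on_seat = score_pose_placement(scene, torso, hips, offset=np.array([10, 10, 3]))
    in_air = score_pose_placement(scene, torso, hips, offset=np.array([2, 2, 6]))
    print("score on seat:", on_seat, "score in mid-air:", in_air)
```

Scanning such a score over placements and over a vocabulary of poses yields a per-location estimate of which interactions the scene supports, which is the intuition behind predicting a human "workspace" from estimated geometry.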

BibTeX Reference
@conference{Gupta-2011-7257,
title = {From 3D Scene Geometry to Human Workspace},
author = {Abhinav Gupta and Scott Satkin and Alexei A. Efros and Martial Hebert},
booktitle = {IEEE Conference on Computer Vision and Pattern Recognition},
keywords = {Human-Centric Scene Understanding, Human Workspace, 3D Scene Understanding, Affordances, Task-Based Scene Understanding},
sponsor = {ONR},
school = {Robotics Institute, Carnegie Mellon University},
month = {May},
year = {2011},
pages = {1961-1968},
address = {Pittsburgh, PA},
}