Learning Object-specific Grasp Affordance Densities

Renaud Detry, Emre Başeski, Mila Popović, Younes Touati, Norbert Krüger, Oliver Kroemer, Jan Peters, and Justus Piater
Conference Paper, Proceedings of IEEE 8th International Conference on Development and Learning (DEVLRN '09), June, 2009

Abstract

This paper addresses the issue of learning and representing object grasp affordances, i.e. object-gripper relative configurations that lead to successful grasps. The purpose of grasp affordances is to organize and store all the knowledge that an agent has about grasping an object, in order to facilitate reasoning about grasping solutions and their achievability. The affordance representation consists of a continuous probability density function defined on the 6D gripper pose space (3D position and orientation) within an object-relative reference frame. Grasp affordances are initially learned from various sources, e.g. from imitation or from visual cues, yielding grasp hypothesis densities. These densities are attached to a learned 3D visual object model, and pose estimation of the visual model allows a robotic agent to execute samples from a grasp hypothesis density under various object poses. Grasp outcomes are then used to learn grasp empirical densities, i.e. densities of grasps that have been confirmed through experience. We show the results of learning grasp hypothesis densities from both imitation and visual cues, and present grasp empirical densities learned by a robot from physical experience.
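
For concreteness, the sketch below illustrates the pipeline the abstract describes: a grasp density represented as a kernel mixture over 6D gripper poses in the object frame, sampling a grasp from it, mapping the sample into the world frame via an estimated object pose, and re-fitting the mixture on the grasps that succeeded to obtain an empirical density. This is a minimal Python illustration, not the authors' implementation: the GraspDensity class, the isotropic Gaussian position kernel, the small-random-rotation orientation kernel, the bandwidths, and the simulate_success stand-in are all assumptions made here for the example; the paper defines its densities with kernels tailored to the pose space.

import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)

class GraspDensity:
    """Kernel mixture over 6D gripper poses (object frame):
    one kernel per observed grasp, with a Gaussian position
    component and a small-rotation orientation component."""

    def __init__(self, positions, quats, weights=None,
                 pos_bw=0.01, ori_bw=0.1):
        self.positions = np.asarray(positions)   # (n, 3), metres
        self.quats = np.asarray(quats)           # (n, 4), unit quaternions
        n = len(self.positions)
        self.weights = (np.full(n, 1.0 / n) if weights is None
                        else np.asarray(weights, float) / np.sum(weights))
        self.pos_bw = pos_bw  # position bandwidth (m)
        self.ori_bw = ori_bw  # orientation bandwidth (rad)

    def sample(self):
        """Draw one gripper pose: pick a kernel by weight, then perturb
        its position (Gaussian) and orientation (random small rotation)."""
        i = rng.choice(len(self.weights), p=self.weights)
        pos = self.positions[i] + rng.normal(0.0, self.pos_bw, 3)
        noise = Rotation.from_rotvec(rng.normal(0.0, self.ori_bw, 3))
        quat = (noise * Rotation.from_quat(self.quats[i])).as_quat()
        return pos, quat

def to_world(object_pose, grasp_pos, grasp_quat):
    """Map an object-relative grasp into the world frame using the
    object pose recovered by visual pose estimation."""
    obj_pos, obj_rot = object_pose   # (3,) position, scipy Rotation
    world_pos = obj_pos + obj_rot.apply(grasp_pos)
    world_quat = (obj_rot * Rotation.from_quat(grasp_quat)).as_quat()
    return world_pos, world_quat

# Hypothesis density from two demonstrated (imitated) grasps.
demo_pos = [[0.0, 0.05, 0.10], [0.0, -0.05, 0.10]]
demo_quat = [Rotation.identity().as_quat()] * 2
hypothesis = GraspDensity(demo_pos, demo_quat)

# Execute one sample under an estimated object pose.
g_pos, g_quat = hypothesis.sample()
obj_pose = (np.array([0.5, 0.2, 0.0]), Rotation.from_euler('z', 0.7))
print(to_world(obj_pose, g_pos, g_quat))

def simulate_success(pos, quat):
    # Hypothetical stand-in for physical grasp execution: here a grasp
    # "succeeds" if it stays near the demonstrated approach height.
    return abs(pos[2] - 0.10) < 0.015

# Empirical density: re-fit the mixture on the executed grasps that
# were confirmed through (simulated) experience.
executed = [hypothesis.sample() for _ in range(200)]
succeeded = [g for g in executed if simulate_success(*g)]
empirical = GraspDensity([p for p, _ in succeeded],
                         [q for _, q in succeeded])

In this sketch the empirical density is simply a new mixture supported by the successful grasps; outcome counts could equally be folded in as kernel weights.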

BibTeX

@conference{Detry-2009-112248,
author = {Renaud Detry and Emre Ba{\c{s}}eski and Mila Popovi{\'c} and Younes Touati and Norbert Kr{\"u}ger and Oliver Kroemer and Jan Peters and Justus Piater},
title = {Learning Object-specific Grasp Affordance Densities},
booktitle = {Proceedings of IEEE 8th International Conference on Development and Learning (DEVLRN '09)},
year = {2009},
month = {June},
}