Learning Visual Representations for Interactive Systems

Justus Piater, Sébastien Jodogne, Renaud Detry, Dirk Kraft, Norbert Krüger, Oliver Kroemer, and Jan Peters
Conference Paper, Proceedings of the 14th International Symposium on Robotics Research (ISRR '09), pp. 399–416, August 2009

Abstract

We describe two quite different methods for associating action parameters with visual percepts. Our RLVC algorithm performs reinforcement learning directly on the visual input space. To make this very large space manageable, RLVC interleaves the reinforcement learner with a supervised classification algorithm that seeks to split perceptual states so as to reduce perceptual aliasing. This results in an adaptive discretization of the perceptual space based on the presence or absence of visual features. Its extension RLJC also handles continuous action spaces. In contrast to the minimalistic visual representations produced by RLVC and RLJC, our second method learns structural object models for robust object detection and pose estimation by probabilistic inference. To these models, the method associates grasp experiences autonomously learned by trial and error. These experiences form a nonparametric representation of grasp success likelihoods over gripper poses, which we call a grasp density. Thus, object detection in a novel scene simultaneously produces suitable grasping options.
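
Both methods lend themselves to small illustrative sketches. The first is a minimal Python sketch of RLVC-style adaptive discretization: discrete perceptual states are refined by testing the presence or absence of one more visual feature whenever a state appears perceptually aliased. The class name, the feature identifiers, and the usage below are hypothetical; feature detection, the reinforcement learner, and the aliasing test itself are omitted, so this is only a sketch of the discretization structure, not the paper's algorithm.

# A minimal sketch of RLVC's adaptive discretization: perceptual states
# form a binary tree whose internal nodes test the presence or absence of
# a visual feature; leaves are the discrete states handed to the RL agent.
# Feature names and the split trigger are placeholders (assumptions).

class PerceptualState:
    def __init__(self):
        self.feature = None   # feature this node tests; None for a leaf
        self.present = None   # child state when the feature is present
        self.absent = None    # child state when the feature is absent

    def classify(self, features):
        """Map a set of detected visual features to a discrete state (leaf)."""
        if self.feature is None:
            return self
        child = self.present if self.feature in features else self.absent
        return child.classify(features)

    def split(self, feature):
        """Refine an aliased state by testing one more visual feature."""
        self.feature = feature
        self.present, self.absent = PerceptualState(), PerceptualState()

# Hypothetical usage: one aliased state is split on feature "f17", so
# percepts containing "f17" now map to a different discrete state.
root = PerceptualState()
root.split("f17")
assert root.classify({"f17", "f03"}) is root.present
assert root.classify({"f03"}) is root.absent

The second sketch illustrates the grasp-density idea: successful grasp trials are retained as samples, and the success likelihood of a candidate gripper pose is estimated nonparametrically by kernel density estimation. This toy version uses 3-D gripper positions with isotropic Gaussian kernels and a made-up bandwidth; the paper's densities are defined over full 6-DOF gripper poses, which require orientation-aware kernels not shown here.

# A minimal sketch of a grasp density: kernel density estimation over the
# gripper positions of successful trial-and-error grasps. Positions,
# bandwidth, and usage are illustrative assumptions.

import numpy as np

class GraspDensity:
    def __init__(self, bandwidth=0.02):
        self.bandwidth = bandwidth   # Gaussian kernel width in meters (assumed)
        self.samples = []            # positions of successful grasps

    def add_success(self, position):
        """Record the gripper position of a grasp that succeeded."""
        self.samples.append(np.asarray(position, dtype=float))

    def likelihood(self, position):
        """Kernel density estimate of grasp success at a candidate position."""
        if not self.samples:
            return 0.0
        x = np.asarray(position, dtype=float)
        sq_dists = np.array([np.sum((x - s) ** 2) for s in self.samples])
        kernels = np.exp(-0.5 * sq_dists / self.bandwidth ** 2)
        return float(kernels.mean())

# Hypothetical usage: learn from a few successful trials, then rank
# candidate gripper positions produced by object detection.
density = GraspDensity()
for trial in [(0.10, 0.02, 0.30), (0.11, 0.01, 0.31), (0.09, 0.03, 0.29)]:
    density.add_success(trial)

candidates = [(0.10, 0.02, 0.30), (0.25, 0.00, 0.10)]
best = max(candidates, key=density.likelihood)
print("preferred grasp:", best)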

BibTeX

@conference{Piater-2009-112229,
author = {Justus Piater and S{\'e}bastien Jodogne and Renaud Detry and Dirk Kraft and Norbert Kr{\"u}ger and Oliver Kroemer and Jan Peters},
title = {Learning Visual Representations for Interactive Systems},
booktitle = {Proceedings of the 14th International Symposium on Robotics Research (ISRR '09)},
year = {2009},
month = {August},
pages = {399--416},
}