
Learning Manipulation Graphs from Demonstrations Using Multimodal Sensory Signals

Zhe Su, Oliver Kroemer, Gerald E. Loeb, Gaurav S. Sukhatme, and Stefan Schaal
Conference Paper, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 2758–2765, May 2018

Abstract

Complex contact manipulation tasks can be decomposed into sequences of motor primitives. Individual primitives often end in a distinct contact state, such as inserting a screwdriver tip into a screw head or loosening the screw through twisting. To achieve robust execution, the robot should be able to verify that a primitive’s goal has been reached and to disambiguate that goal from erroneous contact states. In this paper, we introduce and evaluate a framework that autonomously constructs manipulation graphs from manipulation demonstrations. Our manipulation graphs include sequences of motor primitives for performing a manipulation task together with the corresponding contact state information. The sensory models for the contact states allow the robot to verify the goal of each motor primitive and to detect erroneous contact changes. The proposed framework was experimentally evaluated on grasping, unscrewing, and insertion tasks on a Barrett arm and hand equipped with two BioTac sensors. The results of our experiments indicate that the learned manipulation graphs achieve more robust manipulation execution by confirming sensory goals as well as by discovering and detecting novel failure modes.
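
As a rough illustration of the idea (not the authors' implementation), the sketch below models a manipulation graph as an ordered set of primitive nodes, each paired with a Gaussian model of its goal contact state and optional models of known failure states. The class names, the Mahalanobis-distance test, and the threshold value are assumptions made for this example only.

# Illustrative sketch: a manipulation graph whose nodes are motor primitives and
# whose contact states are modeled as multivariate Gaussians over sensory features
# (e.g., statistics of BioTac signals). Names and thresholds are assumptions.

import numpy as np


class ContactStateModel:
    """Gaussian model of the sensory signature of a contact state."""

    def __init__(self, mean, cov):
        self.mean = np.asarray(mean, dtype=float)
        self.cov_inv = np.linalg.inv(np.asarray(cov, dtype=float))

    def mahalanobis(self, features):
        d = np.asarray(features, dtype=float) - self.mean
        return float(np.sqrt(d @ self.cov_inv @ d))


class PrimitiveNode:
    """A motor primitive with its goal contact state and known failure modes."""

    def __init__(self, name, goal_model, failure_models=None, threshold=3.0):
        self.name = name
        self.goal_model = goal_model
        self.failure_models = failure_models or {}  # e.g., {"slip": ContactStateModel(...)}
        self.threshold = threshold

    def verify(self, features):
        """Return ('success', None), ('failure', mode), or ('unknown', None)."""
        if self.goal_model.mahalanobis(features) < self.threshold:
            return "success", None
        for mode, model in self.failure_models.items():
            if model.mahalanobis(features) < self.threshold:
                return "failure", mode
        return "unknown", None


class ManipulationGraph:
    """Ordered primitives with contact-state verification after each one."""

    def __init__(self, nodes):
        self.nodes = nodes  # list of PrimitiveNode in execution order

    def execute(self, run_primitive, read_features):
        """run_primitive(name) executes a primitive; read_features() returns sensory features."""
        for node in self.nodes:
            run_primitive(node.name)
            status, mode = node.verify(read_features())
            if status != "success":
                return False, node.name, status, mode
        return True, None, "success", None

In such a sketch, the goal and failure models for each primitive would be fit from the sensory signals recorded around the segment boundaries of the demonstrations, which is the role the learned contact-state models play in the paper.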

BibTeX

@conference{Su-2018-112296,
author = {Zhe Su and Oliver Kroemer and Gerald E. Loeb and Gaurav S. Sukhatme and Stefan Schaal},
title = {Learning Manipulation Graphs from Demonstrations Using Multimodal Sensory Signals},
booktitle = {Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)},
year = {2018},
month = {May},
pages = {2758--2765},
}