Multiple Interactions Made Easy (MIME): Large Scale Demonstrations Data for Imitation

Pratyusha Sharma, Lekha Mohan, Lerrel Pinto, and Abhinav Gupta
Conference Paper, Proceedings of the Conference on Robot Learning (CoRL), pp. 906-915, October 2018

Abstract

In recent years, we have seen an emergence of data-driven approaches in robotics. However, most existing efforts and datasets are either in simulation or focus on a single task in isolation, such as grasping, pushing, or poking. To make progress and capture the space of manipulation, we need to collect a large-scale dataset of diverse tasks such as pouring, opening bottles, and stacking objects. But how does one collect such a dataset? In this paper, we present the largest available robotic-demonstration dataset (MIME), which contains 8260 human-robot demonstrations over 20 different robotic tasks. These tasks range from the simple task of pushing objects to the difficult task of stacking household objects. Our dataset consists of videos of human demonstrations and kinesthetic trajectories of robot demonstrations. We also propose to use this dataset for the task of mapping third-person video features to robot trajectories. Furthermore, we present two different approaches using this dataset and evaluate the predicted robot trajectories against ground-truth trajectories. We hope our dataset inspires research in multiple areas, including visual imitation, trajectory prediction, and multi-task robotic learning.
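The abstract mentions evaluating predicted robot trajectories against ground-truth kinesthetic trajectories. As an illustration only (the paper's actual metric is not specified here), one simple way to score such a prediction is the mean per-timestep Euclidean distance in joint space; the function and variable names below are hypothetical:

```python
def trajectory_error(predicted, ground_truth):
    """Mean Euclidean distance between corresponding joint configurations.

    predicted, ground_truth: equal-length lists, each element a tuple of
    joint angles (radians) at one timestep.
    """
    if len(predicted) != len(ground_truth):
        raise ValueError("trajectories must have the same number of timesteps")
    total = 0.0
    for p, g in zip(predicted, ground_truth):
        # Euclidean distance between the two joint configurations at this step
        total += sum((a - b) ** 2 for a, b in zip(p, g)) ** 0.5
    return total / len(predicted)

# Toy example: a 3-step trajectory for a 2-joint arm.
gt = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.4)]
pred = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.4)]
print(trajectory_error(pred, gt))  # 0.0 for a perfect prediction
```

A lower score means the predicted trajectory tracks the demonstration more closely; zero indicates an exact match.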

BibTeX

@conference{Sharma-2018-113276,
author = {Pratyusha Sharma and Lekha Mohan and Lerrel Pinto and Abhinav Gupta},
title = {Multiple Interactions Made Easy ({MIME}): Large Scale Demonstrations Data for Imitation},
booktitle = {Proceedings of the Conference on Robot Learning (CoRL)},
year = {2018},
month = {October},
pages = {906--915},
}