Policy Transfer via Modularity and Reward Guiding

Ignasi Clavera, David Held, and Pieter Abbeel
Conference Paper, Proceedings of (IROS) IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1537 - 1544, September 2017

Abstract

Non-prehensile manipulation, such as pushing, is an important way for robots to move objects and is sometimes preferred as an alternative to grasping. However, due to unknown frictional forces, pushing has proven to be a difficult task for robots. We explore the use of reinforcement learning to train a robot to robustly push an object. To deal with the sample complexity of training such a method, we train the pushing policy in simulation and then transfer it to the real world. To ease this transfer, we propose to use modularity to separate the learned policy from the raw inputs and outputs; rather than training “end-to-end,” we decompose our system into modules and train only a subset of these modules in simulation. We further demonstrate that we can incorporate prior knowledge about the task into the state space and the reward function to speed up convergence. Finally, we introduce “reward guiding,” a modification of the reward function that further reduces training time. We demonstrate, in both simulation and real-world experiments, that this approach can reliably push an object from many initial positions and orientations.
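
As a rough illustration of the two ideas, the sketch below shows one way the decomposition and the shaped reward could look in Python. All names (perceive, policy, control, guided_reward) and the specific progress-based shaping term are assumptions made for illustration; the paper's exact module boundaries and reward formulation are not reproduced here.

import numpy as np

# Modularity: decompose the system so the learned policy never consumes
# raw pixels or emits motor torques directly. Only the middle module is
# trained in simulation; the perception and control modules stay fixed.

def perceive(raw_observation):
    # Perception module (hypothetical placeholder): extract a
    # low-dimensional state from raw sensor input.
    object_pos = np.asarray(raw_observation["object_pos"])
    goal_pos = np.asarray(raw_observation["goal_pos"])
    return np.concatenate([object_pos, goal_pos])

def policy(state, weights):
    # Learned module (placeholder linear policy): state -> pushing action.
    return weights @ state

def control(action):
    # Control module: map the policy's high-level action to joint
    # commands (stubbed out here).
    pass

# Reward guiding (illustrative): reward per-step progress of the object
# toward the goal instead of only terminal success, giving early training
# a denser signal. This shaping term is an assumption, not the paper's
# exact formulation.

def guided_reward(object_pos, prev_object_pos, goal_pos, success_radius=0.02):
    prev_dist = np.linalg.norm(goal_pos - prev_object_pos)
    dist = np.linalg.norm(goal_pos - object_pos)
    progress = prev_dist - dist  # > 0 when the object moved closer to the goal
    bonus = 10.0 if dist < success_radius else 0.0
    return progress + bonus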

Notes
Associated Lab - Robots Perceiving and Doing

BibTeX

@conference{Held-2017-102823,
author = {Ignasi Clavera and David Held and Pieter Abbeel},
title = {Policy Transfer via Modularity and Reward Guiding},
booktitle = {Proceedings of (IROS) IEEE/RSJ International Conference on Intelligent Robots and Systems},
year = {2017},
month = {September},
pages = {1537 - 1544},
keywords = {reinforcement learning, object manipulation, transfer},
}