
Probabilistically Safe Policy Transfer

David Held, Zoe McCarthy, Michael Zhang, Fred Shentu and Pieter Abbeel
Conference Paper, International Conference on Robotics and Automation (ICRA), May 2017



Abstract

Although learning-based methods have great potential for robotics, one concern is that a robot that updates its parameters might cause large amounts of damage before it learns the optimal policy. We formalize the idea of safe learning in a probabilistic sense by defining an optimization problem: we desire to maximize the expected return while keeping the expected damage below a given safety limit. We study this optimization for the case of a robot manipulator with safety-based torque limits. We would like to ensure that the damage constraint is maintained at every step of the optimization and not just at convergence. To achieve this aim, we introduce a novel method which predicts how modifying the torque limit, as well as how updating the policy parameters, might affect the robot’s safety. We show through a number of experiments that our approach allows the robot to improve its performance while ensuring that the expected damage constraint is not violated during the learning process.
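
As a rough sketch, the safe-learning objective described in the abstract can be written as a constrained optimization problem. The notation below (policy parameters \theta, expected return R, expected damage D, safety limit d_{\max}) is assumed for illustration and is not taken verbatim from the paper:

\max_{\theta} \; \mathbb{E}\left[ R(\theta) \right] \quad \text{subject to} \quad \mathbb{E}\left[ D(\theta) \right] \le d_{\max},

with the additional requirement that the constraint hold at every iterate of the learning process, not only at the final converged policy.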

BibTeX Reference
@conference{Held-2017-103069,
  author    = {David Held and Zoe McCarthy and Michael Zhang and Fred Shentu and Pieter Abbeel},
  title     = {Probabilistically Safe Policy Transfer},
  booktitle = {International Conference on Robotics and Automation (ICRA)},
  year      = {2017},
  month     = {May},
}