
Probabilistically Safe Policy Transfer

David Held, Zoe McCarthy, Michael Zhang, Fred Shentu, and Pieter Abbeel
Conference Paper, Proceedings of the International Conference on Robotics and Automation (ICRA), pp. 5798-5805, May 2017

Abstract

Although learning-based methods have great potential for robotics, one concern is that a robot that updates its parameters might cause large amounts of damage before it learns the optimal policy. We formalize the idea of safe learning in a probabilistic sense by defining an optimization problem: we desire to maximize the expected return while keeping the expected damage below a given safety limit. We study this optimization for the case of a robot manipulator with safety-based torque limits. We would like to ensure that the damage constraint is maintained at every step of the optimization and not just at convergence. To achieve this aim, we introduce a novel method that predicts how modifying the torque limit, as well as how updating the policy parameters, might affect the robot's safety. We show through a number of experiments that our approach allows the robot to improve its performance while ensuring that the expected damage constraint is not violated during the learning process.
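
A minimal sketch of the constrained objective described in the abstract; the symbols below (policy parameters θ, return R, damage D, safety limit d_max, iterate index k) are illustrative notation and not necessarily the paper's own:

% Display-math sketch of the safe-learning problem from the abstract
% (assumes amsmath; notation is illustrative, not taken from the paper):
% maximize expected return subject to expected damage staying below a limit,
% with the constraint required at every update step, not only at convergence.
\[
\begin{aligned}
\max_{\theta} \quad & \mathbb{E}\!\left[ R(\theta) \right] \\
\text{subject to} \quad & \mathbb{E}\!\left[ D(\theta_k) \right] \le d_{\max}
  \quad \text{for every policy iterate } \theta_k \text{ during learning.}
\end{aligned}
\]

In the manipulator setting studied in the paper, the damage term is tied to safety-based torque limits, and the proposed method predicts how changing the torque limit or the policy parameters would affect whether this constraint continues to hold.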

BibTeX

@conference{Held-2017-103069,
author = {David Held and Zoe McCarthy and Michael Zhang and Fred Shentu and Pieter Abbeel},
title = {Probabilistically Safe Policy Transfer},
booktitle = {Proceedings of (ICRA) International Conference on Robotics and Automation},
year = {2017},
month = {May},
pages = {5798--5805},
}