
Towards Safe Reinforcement Learning in the Real World

Edward Ahn
Master's Thesis, Tech. Report, CMU-RI-TR-19-56, July, 2019


Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.


Control of mobile robots at high speeds over slippery, rough terrain is difficult. One approach to designing controllers for the complex, non-uniform dynamics of unstructured environments is to use model-free, learning-based methods. However, these methods often lack the notion of safety needed to deploy controllers without danger, and hence have rarely been tested successfully on real-world tasks. In this work, we present methods and techniques that allow model-free, learning-based approaches to learn low-level controllers for mobile robot navigation while minimizing violations of user-defined safety constraints. We show these learned controllers running robustly both in simulation and in the real world, on a 1:10-scale RC car as well as a full-size vehicle called the MRZR.
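One common way to frame "minimizing violations of user-defined safety constraints" is as a constrained MDP solved with a Lagrangian relaxation: the agent maximizes reward minus a learned penalty weight times the constraint cost, while the penalty weight rises whenever the constraint is violated. The sketch below illustrates this idea on a toy two-action problem; it is a generic illustration with invented numbers, not the specific method used in the thesis.

```python
import math
import random

# Toy constrained bandit, a minimal sketch of the Lagrangian approach to
# constrained RL. All quantities here are illustrative assumptions:
#   action 0 ("safe"):  reward 1.0, cost 0.0
#   action 1 ("risky"): reward 2.0, cost 1.0 with probability 0.5
# Constraint: keep expected cost per step at or below COST_LIMIT.
random.seed(0)

COST_LIMIT = 0.1
theta = 0.0            # logit preference for the risky action
lam = 0.0              # Lagrange multiplier on the safety constraint
LR_THETA, LR_LAM = 0.05, 0.01

def p_risky(th):
    """Probability of choosing the risky action under a sigmoid policy."""
    return 1.0 / (1.0 + math.exp(-th))

for step in range(5000):
    p = p_risky(theta)
    risky = random.random() < p
    if risky:
        reward = 2.0
        cost = 1.0 if random.random() < 0.5 else 0.0
    else:
        reward, cost = 1.0, 0.0

    # REINFORCE update on the penalized reward r - lam * c:
    # grad of log pi(a) w.r.t. theta for a sigmoid policy.
    grad_logp = (1.0 - p) if risky else (-p)
    theta += LR_THETA * (reward - lam * cost) * grad_logp

    # Dual ascent: increase lam while the constraint is violated,
    # decrease it (down to zero) when there is slack.
    lam = max(0.0, lam + LR_LAM * (cost - COST_LIMIT))
```

As the multiplier grows, the penalized value of the risky action falls below that of the safe one, so training drives the risky-action probability down toward the level where the cost constraint is met rather than toward the raw reward maximizer.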

@mastersthesis{ahn2019towards,
  author  = {Edward Ahn},
  title   = {Towards Safe Reinforcement Learning in the Real World},
  year    = {2019},
  month   = {July},
  school  = {Carnegie Mellon University},
  address = {Pittsburgh, PA},
  number  = {CMU-RI-TR-19-56},
}