
Improved Learning of Dynamics for Control

Arun Venkatraman, Roberto Capobianco, Lerrel Pinto, Martial Hebert, Daniele Nardi, and J. Andrew (Drew) Bagnell
Conference Paper, Proceedings of International Symposium on Experimental Robotics (ISER '16), pp. 703-713, October 2016

Abstract

Model-based reinforcement learning (MBRL) plays an important role in developing control strategies for robotic systems. However, when dealing with complex platforms, it is difficult to model system dynamics with analytic models. While data-driven tools offer an alternative for tackling this problem, collecting data on physical systems is non-trivial. Hence, smart solutions are required to effectively learn dynamics models from a small number of examples. In this paper, we present an extension to Data As Demonstrator for handling controlled dynamics, in order to improve the multiple-step prediction capabilities of the learned dynamics models. Results show the efficacy of our algorithm in developing LQR, iLQR, and open-loop trajectory-based control strategies on simulated benchmarks as well as physical robot platforms.
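
As a rough illustration of the idea (not the authors' implementation), the sketch below follows the Data As Demonstrator recipe for a controlled system: fit a one-step model x_{t+1} = f(x_t, u_t), roll it forward along the training trajectories under the recorded controls, and aggregate the model's own predictions, paired with the ground-truth next states, back into the training set before refitting. The function name fit_dad_controlled and the choice of a scikit-learn Ridge regressor are assumptions made for this example.

# Hypothetical sketch (not the authors' code) of DaD-style training for a
# controlled system x_{t+1} = f(x_t, u_t), assuming a scikit-learn regressor.
import numpy as np
from sklearn.linear_model import Ridge

def fit_dad_controlled(trajectories, n_iters=10):
    """trajectories: list of (X, U) pairs, where X has shape (T+1, dx)
    (states) and U has shape (T, du) (recorded controls)."""
    # Initial one-step dataset: (x_t, u_t) -> x_{t+1}.
    inputs = np.vstack([np.hstack([X[:-1], U]) for X, U in trajectories])
    targets = np.vstack([X[1:] for X, _ in trajectories])
    model = Ridge(alpha=1e-3).fit(inputs, targets)

    for _ in range(n_iters):
        new_inputs, new_targets = [], []
        for X, U in trajectories:
            x_hat = X[0]
            for t in range(len(U)):
                if t > 0:
                    # Pair the *predicted* state and recorded control with the
                    # ground-truth next state, so retraining teaches the model
                    # to correct its own multi-step drift.
                    new_inputs.append(np.hstack([x_hat, U[t]]))
                    new_targets.append(X[t + 1])
                # Roll the learned model forward under the recorded control.
                x_hat = model.predict(np.hstack([x_hat, U[t]])[None, :])[0]
        inputs = np.vstack([inputs, np.array(new_inputs)])
        targets = np.vstack([targets, np.array(new_targets)])
        model = Ridge(alpha=1e-3).fit(inputs, targets)
    return model

The learned model can then be linearized for LQR/iLQR or unrolled for open-loop trajectory optimization; a common practical refinement is to clamp or discard rolled-out states that leave the range of the training data so the aggregated dataset stays well conditioned.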

BibTeX

@conference{Venkatraman-2016-5603,
author = {Arun Venkatraman and Roberto Capobianco and Lerrel Pinto and Martial Hebert and Daniele Nardi and J. Andrew (Drew) Bagnell},
title = {Improved Learning of Dynamics for Control},
booktitle = {Proceedings of International Symposium on Experimental Robotics (ISER '16)},
year = {2016},
month = {October},
pages = {703--713},
}