
Improved Learning of Dynamics for Control

Arun Venkatraman, Roberto Capobianco, Lerrel Pinto, Martial Hebert, Daniele Nardi and J. Andrew (Drew) Bagnell
International Symposium on Experimental Robotics, October, 2016

Download Publication (PDF)

Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author’s copyright. These works may not be reposted without the explicit permission of the copyright holder.


Model-based reinforcement learning (MBRL) plays an important role in developing control strategies for robotic systems. However, when dealing with complex platforms, it is difficult to model system dynamics with analytic models. While data-driven tools offer an alternative way to tackle this problem, collecting data on physical systems is non-trivial. Hence, smart solutions are required to effectively learn dynamics models from a small number of examples. In this paper, we present an extension to Data As Demonstrator that handles controlled dynamics, in order to improve the multiple-step prediction capabilities of the learned dynamics models. Results show the efficacy of our algorithm in developing LQR, iLQR, and open-loop trajectory-based control strategies on simulated benchmarks as well as physical robot platforms.
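The core idea behind Data As Demonstrator is to reduce multi-step prediction error by rolling the learned model forward and adding corrective training pairs that map each *predicted* state back toward the *true* next state observed in the data. The sketch below is not taken from the paper; it is a minimal illustration of this data-aggregation loop extended to controlled dynamics, assuming a simple linear least-squares model x_{t+1} ≈ A x_t + B u_t (the function names and the choice of model class are this sketch's own).

```python
import numpy as np

def fit_linear_dynamics(X, U, Xnext):
    """Least-squares fit of x_{t+1} ~ A x_t + B u_t; returns W = [A B]."""
    Z = np.hstack([X, U])                       # stacked (x_t, u_t) features
    W, *_ = np.linalg.lstsq(Z, Xnext, rcond=None)
    return W.T                                  # shape (dx, dx + du)

def rollout(W, x0, U):
    """Forward-simulate the learned model from x0 under the control sequence U."""
    xs = [x0]
    for u in U:
        xs.append(W @ np.concatenate([xs[-1], u]))
    return np.array(xs)

def dad_control(trajs, n_iters=5):
    """DaD-style training for controlled dynamics (illustrative sketch).

    trajs: list of (X, U) pairs with X of shape (T+1, dx) and U of shape (T, du).
    """
    # Initial dataset: observed one-step transitions.
    X = np.vstack([X_[:-1] for X_, _ in trajs])
    U = np.vstack([U_ for _, U_ in trajs])
    Y = np.vstack([X_[1:] for X_, _ in trajs])
    W = fit_linear_dynamics(X, U, Y)
    for _ in range(n_iters):
        # Roll out the current model along each trajectory's controls, then
        # pair every predicted state with the true next state (the
        # "demonstrator") and aggregate these corrections into the dataset.
        Xa, Ua, Ya = [X], [U], [Y]
        for X_, U_ in trajs:
            pred = rollout(W, X_[0], U_)
            Xa.append(pred[:-1]); Ua.append(U_); Ya.append(X_[1:])
        W = fit_linear_dynamics(np.vstack(Xa), np.vstack(Ua), np.vstack(Ya))
    return W
```

The resulting model W = [A B] can then be handed directly to an LQR or iLQR solver, since the corrective pairs penalize exactly the compounding errors that a multi-step rollout (and hence a trajectory optimizer) would encounter.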

BibTeX Reference
@inproceedings{venkatraman2016improved,
  title = {Improved Learning of Dynamics for Control},
  author = {Arun Venkatraman and Roberto Capobianco and Lerrel Pinto and Martial Hebert and Daniele Nardi and J. Andrew (Drew) Bagnell},
  booktitle = {International Symposium on Experimental Robotics},
  month = {October},
  year = {2016},
}