Autonomous Helicopter Control using Reinforcement Learning Policy Search Methods

J. Andrew (Drew) Bagnell and Jeff Schneider
Proceedings of the International Conference on Robotics and Automation 2001, May, 2001.



Abstract
Many control problems in robotics can be cast as Partially Observed Markovian Decision Problems (POMDPs), an optimal control formalism. Finding optimal solutions to such problems in general, however, is known to be intractable. It has often been observed that in practice simple structured controllers suffice for good sub-optimal control, and recent research in the artificial intelligence community has focused on policy search methods as techniques for finding sub-optimal controllers when such structured controllers exist. Traditional model-based reinforcement learning algorithms make a certainty-equivalence assumption on their learned models and compute optimal policies for a maximum-likelihood Markovian model. In this work, we consider algorithms that evaluate and synthesize controllers under distributions of Markovian models. Previous work has demonstrated that algorithms that maximize mean reward with respect to model uncertainty lead to safer and more robust controllers. We briefly consider other performance criteria that emphasize robustness and exploration in the search for controllers, and note the relation to experiment design and active learning. To validate the power of the approach on a robotic application, we demonstrate the presented learning control algorithm by flying an autonomous helicopter. We show that the learned controller is robust and delivers good performance in this real-world domain.
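The core idea of scoring controllers under a distribution of models, rather than a single maximum-likelihood model, can be sketched in a few lines. The following toy example is a hypothetical illustration only (the two-state model, Beta posteriors, and reward structure are assumptions, not the paper's actual setup): it samples Markovian models from a posterior and picks the candidate policy with the highest mean reward across the samples.

```python
import random

# Hypothetical 2-state, 2-action Markovian model with uncertain transition
# probabilities. Rather than committing to the maximum-likelihood model
# (certainty equivalence), we sample models from a posterior and score each
# candidate controller by its mean reward over the sampled models.

def sample_model(rng):
    """Draw one model: model[s][a] = probability of landing in state 0."""
    # Beta posteriors stand in for model uncertainty (counts are assumed).
    return [[rng.betavariate(8, 2), rng.betavariate(2, 8)],
            [rng.betavariate(5, 5), rng.betavariate(1, 9)]]

def evaluate(policy, model, rng, horizon=50, episodes=20):
    """Monte Carlo estimate of total reward; visiting state 0 yields 1."""
    total = 0.0
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            a = policy[s]
            s = 0 if rng.random() < model[s][a] else 1
            total += 1.0 if s == 0 else 0.0
    return total / episodes

def mean_reward(policy, n_models=30, seed=0):
    """Average a policy's reward over sampled models, not just the MLE."""
    rng = random.Random(seed)
    return sum(evaluate(policy, sample_model(rng), rng)
               for _ in range(n_models)) / n_models

# Policy search over a tiny discrete policy class: one action per state.
candidates = [(a0, a1) for a0 in (0, 1) for a1 in (0, 1)]
best = max(candidates, key=mean_reward)
```

In the paper's setting the policy class is a structured controller with continuous parameters and the search is over those parameters, but the evaluation principle is the same: average performance over the model distribution, which penalizes controllers that only do well on the single most likely model.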

Keywords
Helicopter Control, UAV, Reinforcement Learning, Policy Search, Robust Control, Robust Reinforcement Learning, POMDPs

Notes
Associated Center(s) / Consortia: Vision and Autonomous Systems Center
Associated Lab(s) / Group(s): Reliable Autonomous Systems Lab, Helicopter Lab, Auton Lab
Associated Project(s): Autonomous Helicopter, Auton Project, Federation of Intelligent Robotic Explorers Project

Text Reference
J. Andrew (Drew) Bagnell and Jeff Schneider, "Autonomous Helicopter Control using Reinforcement Learning Policy Search Methods," Proceedings of the International Conference on Robotics and Automation 2001, May, 2001.

BibTeX Reference
@inproceedings{Bagnell_2001_3791,
   author = "J. Andrew (Drew) Bagnell and Jeff Schneider",
   title = "Autonomous Helicopter Control using Reinforcement Learning Policy Search Methods",
   booktitle = "Proceedings of the International Conference on Robotics and Automation 2001",
   publisher = "IEEE",
   month = "May",
   year = "2001",
}