Policy Search by Dynamic Programming

J. Andrew (Drew) Bagnell, Sham Kakade, Andrew Ng, and Jeff Schneider
Neural Information Processing Systems, December 2003.


Download
  • Adobe portable document format (pdf) (157KB)

Abstract
We consider the policy search approach to reinforcement learning. We show that if a "baseline distribution" is given (indicating roughly how often we expect a good policy to visit each state), then we can derive a policy search algorithm that terminates in a finite number of steps, and for which we can provide non-trivial performance guarantees. We also demonstrate this algorithm on several grid-world POMDPs, a planar biped walking robot, and a double-pole balancing problem.
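The abstract's core idea, choosing a nonstationary policy one time step at a time, working backwards from the horizon and scoring each candidate against the baseline state distribution for that step, can be illustrated with a short sketch. The MDP interface (`step`), the enumerable policy class, the Monte Carlo rollout evaluation, and the toy chain example below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of backwards policy search against a baseline distribution,
# under assumed interfaces (mdp.step, a finite policy class, a baseline sampler).
import random

def rollout_value(mdp, start_state, policies, t, horizon, gamma=1.0):
    """Monte Carlo return from `start_state` at time t, acting with the
    time-indexed `policies` (policies[t], policies[t+1], ...)."""
    state, total, discount = start_state, 0.0, 1.0
    for k in range(t, horizon):
        action = policies[k](state)
        state, reward = mdp.step(state, action)
        total += discount * reward
        discount *= gamma
    return total

def backwards_policy_search(mdp, policy_class, baseline, horizon, n_samples=100):
    """For t = horizon-1 down to 0, pick the policy in `policy_class` that
    maximizes the estimated value under the baseline state distribution for
    step t, given the already-chosen future policies."""
    policies = [None] * horizon
    for t in reversed(range(horizon)):
        def score(pi, t=t):
            total = 0.0
            for _ in range(n_samples):
                s = baseline(t)            # sample a state from the step-t baseline
                trial = policies.copy()
                trial[t] = pi
                total += rollout_value(mdp, s, trial, t, horizon)
            return total / n_samples
        policies[t] = max(policy_class, key=score)
    return policies

if __name__ == "__main__":
    # Toy example: a 5-state chain where reaching state 4 pays reward 1.
    class Chain:
        def step(self, s, a):
            s2 = max(0, min(4, s + a))
            return s2, 1.0 if s2 == 4 else 0.0

    policy_class = [lambda s: -1, lambda s: +1]     # always-left, always-right
    baseline = lambda t: random.randint(0, 4)       # assumed uniform baseline
    pis = backwards_policy_search(Chain(), policy_class, baseline, horizon=4)
    print([pi(2) for pi in pis])                    # expect +1 (move right) at every step
```

The terminating, finite-step character the abstract mentions shows up directly in the sketch: the outer loop runs exactly `horizon` times, and each step is a one-shot maximization over the policy class rather than an open-ended gradient search.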

Keywords
Reinforcement Learning, Control, Policy Search, POMDP, partial observability, dynamic programming

Notes
Associated Lab(s) / Group(s): Auton Lab
Associated Project(s): Auton Project

Text Reference
J. Andrew (Drew) Bagnell, Sham Kakade, Andrew Ng, and Jeff Schneider, "Policy Search by Dynamic Programming," Neural Information Processing Systems, December 2003.

BibTeX Reference
@inproceedings{Bagnell_2003_4485,
   author = "J. Andrew (Drew) Bagnell and Sham Kakade and Andrew Ng and Jeff Schneider",
   title = "Policy Search by Dynamic Programming",
   booktitle = "Neural Information Processing Systems",
   publisher = "MIT Press",
   month = "December",
   year = "2003",
   volume = "16",
}