
Policy Search by Dynamic Programming

J. Andrew (Drew) Bagnell, Sham Kakade, Andrew Ng, and Jeff Schneider
Conference Paper, Proceedings of (NeurIPS) Neural Information Processing Systems, pp. 831-838, December 2003

Abstract

We consider the policy search approach to reinforcement learning. We show that if a "baseline distribution" is given (indicating roughly how often we expect a good policy to visit each state), then we can derive a policy search algorithm that terminates in a finite number of steps, and for which we can provide non-trivial performance guarantees. We also demonstrate this algorithm on several grid-world POMDPs, a planar biped walking robot, and a double-pole balancing problem.
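The algorithm works backwards over a finite horizon: at each time step it selects the policy that performs best on states drawn from that step's baseline distribution, holding the already-selected later policies fixed. The following is a minimal Monte Carlo sketch of that loop, not the paper's implementation; the generative model mdp_sample(s, a) -> (next_state, reward), the per-step baseline_dists[t] objects with a sample() method, and the finite candidate policy_class are all illustrative assumptions.

import numpy as np

def psdp(mdp_sample, baseline_dists, policy_class, horizon, n_rollouts=100):
    """Sketch of Policy Search by Dynamic Programming (PSDP).

    Proceeds backwards from the final time step. At step t, each candidate
    policy is scored by rolling out from states sampled from the baseline
    distribution mu_t, executing the candidate at step t and the
    previously chosen policies at steps t+1, ..., T-1.
    """
    chosen = [None] * horizon
    for t in reversed(range(horizon)):
        best_policy, best_value = None, -np.inf
        for candidate in policy_class:
            returns = []
            for _ in range(n_rollouts):
                s = baseline_dists[t].sample()        # s ~ mu_t (assumed interface)
                total = 0.0
                policy_seq = [candidate] + chosen[t + 1:]
                for pi in policy_seq:                 # roll out steps t..T-1
                    a = pi(s)
                    s, r = mdp_sample(s, a)           # assumed simulator step
                    total += r
                returns.append(total)
            value = float(np.mean(returns))
            if value > best_value:
                best_policy, best_value = candidate, value
        chosen[t] = best_policy
    return chosen                                     # one policy per time step

Because each step's optimization only requires states from the baseline distribution rather than the full state space, each stage reduces to a standard supervised-style policy selection problem, which is what yields the finite termination and performance guarantees described above.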

BibTeX

@conference{Bagnell-2003-8823,
author = {J. Andrew (Drew) Bagnell and Sham Kakade and Andrew Ng and Jeff Schneider},
title = {Policy Search by Dynamic Programming},
booktitle = {Proceedings of (NeurIPS) Neural Information Processing Systems},
year = {2003},
month = {December},
pages = {831 - 838},
publisher = {MIT Press},
keywords = {Reinforcement Learning, Control, Policy Search, POMDP, partial observability, dynamic programming},
}