Stable Function Approximation in Dynamic Programming

Geoffrey Gordon
Proceedings of ICML '95, 1995.


Download
  • Adobe portable document format (pdf) (169KB)

Abstract
The success of reinforcement learning in practical problems depends on the ability to combine function approximation with temporal difference methods such as value iteration. Experiments in this area have produced mixed results; there have been both notable successes and notable disappointments. Theory has been scarce, mostly due to the difficulty of reasoning about function approximators that generalize beyond the observed data. We provide a proof of convergence for a wide class of temporal difference methods involving function approximators such as k-nearest-neighbor, and show experimentally that these methods can be useful. The proof is based on a view of function approximators as expansion or contraction mappings. In addition, we present a novel view of fitted value iteration: an approximate algorithm for one environment turns out to be an exact algorithm for a different environment.
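To make the class of methods described above concrete, here is a minimal sketch of fitted value iteration with a k-nearest-neighbor averager on a toy one-dimensional problem. The dynamics, reward, parameter values, and function names (knn_average, fitted_value_iteration) are illustrative assumptions, not code from the paper. The relevant property is that averaging with fixed nonnegative weights summing to one is a non-expansion in the max norm, so composing it with the discounted Bellman backup still yields a contraction, and the iteration converges to a fixed point.

  # Minimal sketch (not the paper's code): fitted value iteration with a
  # k-nearest-neighbor averager on a toy 1-D task.  All parameters below
  # are illustrative assumptions.
  import numpy as np

  GAMMA = 0.9          # discount factor
  K = 3                # number of neighbors used by the averager
  STEP = 0.05          # how far each action moves the state
  SAMPLES = np.linspace(0.0, 1.0, 21)   # sample states where backups are done

  def knn_average(x, centers, values, k=K):
      # Averager: value at x is the mean of the k nearest sample values.
      # Fixed nonnegative weights summing to one make this a non-expansion
      # in the max norm, the property the convergence proof relies on.
      idx = np.argsort(np.abs(centers - x))[:k]
      return values[idx].mean()

  def step(x, action):
      # Toy deterministic dynamics: move left (-1) or right (+1), clipped;
      # reward only for reaching the right edge.
      nx = np.clip(x + action * STEP, 0.0, 1.0)
      reward = 1.0 if nx >= 1.0 else 0.0
      return nx, reward

  def fitted_value_iteration(tol=1e-6, max_iters=1000):
      v = np.zeros_like(SAMPLES)
      for _ in range(max_iters):
          new_v = np.empty_like(v)
          for i, x in enumerate(SAMPLES):
              backups = []
              for action in (-1, +1):
                  nx, r = step(x, action)
                  backups.append(r + GAMMA * knn_average(nx, SAMPLES, v))
              new_v[i] = max(backups)   # Bellman backup at the sample state
          if np.max(np.abs(new_v - v)) < tol:
              return new_v
          v = new_v
      return v

  if __name__ == "__main__":
      print(np.round(fitted_value_iteration(), 3))

In the paper's terms, each sweep applies an exact Bellman backup at the sample states and then generalizes through the averager; equivalently, the approximate algorithm for this environment can be read as an exact algorithm for a modified environment.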

Notes
Associated Lab(s) / Group(s): Auton Lab
Associated Project(s): Auton Project

Text Reference
Geoffrey Gordon, "Stable Function Approximation in Dynamic Programming," Proceedings of ICML '95, 1995.

BibTeX Reference
@inproceedings{Gordon_1995_2893,
   author = "Geoffrey Gordon",
   title = "Stable Function Approximation in Dynamic Programming",
   booktitle = "Proceedings of ICML '95",
   year = "1995",
}