Barycentric Interpolator for Continuous Space and Time Reinforcement Learning

Remi Munos and Andrew Moore
Neural Information Processing Systems, December, 1998.


Download
  • Adobe portable document format (pdf) (170KB)

Abstract
In order to find the optimal control of continuous state-space and time reinforcement learning (RL) problems, we approximate the value function (VF) with a particular class of functions called barycentric interpolators. We establish sufficient conditions under which an RL algorithm converges to the optimal VF, even when we use approximate models of the state dynamics and the reinforcement functions.
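As a rough illustration of the interpolation scheme named in the abstract (not the paper's RL algorithm itself), the sketch below evaluates a value function at an arbitrary point of a 2-D triangle as a convex combination of the values stored at the triangle's vertices; the weights are the point's barycentric coordinates. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def barycentric_coords(x, vertices):
    """Barycentric coordinates of point x inside a 2-D triangle.

    vertices: (3, 2) array of triangle corners.
    Returns weights lam with lam.sum() == 1 and lam @ vertices == x.
    """
    # Solve [v1 - v0, v2 - v0] @ [l1, l2] = x - v0, then l0 = 1 - l1 - l2.
    T = np.column_stack((vertices[1] - vertices[0], vertices[2] - vertices[0]))
    l12 = np.linalg.solve(T, np.asarray(x, dtype=float) - vertices[0])
    return np.array([1.0 - l12.sum(), l12[0], l12[1]])

def interpolate(x, vertices, values):
    """Barycentric interpolation of vertex values at point x."""
    return barycentric_coords(x, vertices) @ np.asarray(values, dtype=float)

# Value-function samples at the corners of one mesh triangle.
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vals = np.array([0.0, 1.0, 2.0])
print(interpolate([0.25, 0.25], verts, vals))  # 0.5*0 + 0.25*1 + 0.25*2 = 0.75
```

Because the weights are nonnegative inside the triangle and sum to one, the interpolated value never leaves the convex hull of the vertex values, which is the property that makes such interpolators well suited to convergent dynamic-programming updates.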

Keywords
Reinforcement learning, optimal control, discretization methods

Notes
Associated Lab(s) / Group(s): Auton Lab
Associated Project(s): Auton Project
Number of pages: 7

Text Reference
Remi Munos and Andrew Moore, "Barycentric Interpolator for Continuous Space and Time Reinforcement Learning," Neural Information Processing Systems, December, 1998.

BibTeX Reference
@inproceedings{Munos_1998_2094,
   author = "Remi Munos and Andrew Moore",
   title = "Barycentric Interpolator for Continuous Space and Time Reinforcement Learning",
   booktitle = "Neural Information Processing Systems",
   publisher = "MIT Press",
   month = "December",
   year = "1998",
   volume = "11",
}