Reinforcement Learning for Continuous Stochastic Control Problems

Remi Munos and Paul Bourgine
Neural Information Processing Systems, 1997.



Abstract
This paper is concerned with the problem of Reinforcement Learning (RL) for stochastic control problems with continuous state space and continuous time. We state the Hamilton-Jacobi-Bellman (HJB) equation satisfied by the value function and use a finite-difference method to design a convergent approximation scheme. We then propose an RL algorithm based on this scheme and prove its convergence to the optimal solution.
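To illustrate the kind of finite-difference approximation the abstract refers to, the sketch below applies a Kushner-style Markov-chain discretization to a simple 1D controlled diffusion dx = u dt + sigma dW and solves the resulting discounted problem by value iteration. The dynamics, cost function, grid size, and all parameters are illustrative assumptions, not taken from the paper, which treats the general RL setting.

```python
import numpy as np

# Hedged sketch: finite-difference (Markov chain) approximation of the HJB
# equation for an assumed 1D controlled diffusion dx = u dt + sigma dW on [0, 1],
# with discounted running cost and zero terminal cost at the boundaries.
# None of these specifics come from the paper; they are illustrative only.

N = 101                          # grid points (assumed)
h = 1.0 / (N - 1)                # grid spacing
x = np.linspace(0.0, 1.0, N)
sigma = 0.2                      # diffusion coefficient (assumed)
beta = 1.0                       # discount rate (assumed)
controls = [-1.0, 1.0]           # admissible controls (assumed)

def running_cost(xi, u):
    return (xi - 0.5) ** 2       # assumed cost: penalize distance from 0.5

V = np.zeros(N)                  # value function on the grid; V = 0 at boundaries
for _ in range(2000):
    V_new = V.copy()
    for i in range(1, N - 1):
        best = np.inf
        for u in controls:
            b = u                                   # drift under control u
            denom = sigma ** 2 + h * abs(b)
            dt = h ** 2 / denom                     # interpolation time step
            # Upwind transition probabilities to the two grid neighbours
            p_up = (0.5 * sigma ** 2 + h * max(b, 0.0)) / denom
            p_dn = (0.5 * sigma ** 2 + h * max(-b, 0.0)) / denom
            q = running_cost(x[i], u) * dt + np.exp(-beta * dt) * (
                p_up * V[i + 1] + p_dn * V[i - 1])
            best = min(best, q)
        V_new[i] = best
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new
```

Because the discretized chain is locally consistent with the diffusion, the fixed point of this iteration converges to the continuous value function as h → 0, which is the sense of convergence such schemes aim for; the paper's contribution is an RL algorithm built on a scheme of this kind, learned from trajectories rather than from a known model.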

Notes
Associated Lab(s) / Group(s): Auton Lab
Associated Project(s): Auton Project
Number of pages: 7

Text Reference
Remi Munos and Paul Bourgine, "Reinforcement Learning for Continuous Stochastic Control Problems," Neural Information Processing Systems, 1997.

BibTeX Reference
@inproceedings{Munos_1997_2942,
   author = "Remi Munos and Paul Bourgine",
   title = "Reinforcement Learning for Continuous Stochastic Control Problems",
   booktitle = "Neural Information Processing Systems",
   year = "1997",
}