Variable Resolution Dynamic Programming: Efficiently Learning Action Maps in Multivariate Real-valued State-spaces

Andrew Moore
Conference Paper, Proceedings of (ICML) International Conference on Machine Learning, pp. 333-337, June 1991

Abstract

An effective method for creating an autonomous reactive controller is to learn a model of the environment and then use dynamic programming to derive a policy that maximizes long-term reward. Neither learning environmental models nor dynamic programming requires parametric assumptions about the world, so learning can proceed with no danger of becoming "stuck" by a mismatch between the parametric assumptions and reality. The paper discusses how such an approach can be realized in real-valued multivariate state spaces, in which straightforward discretization falls prey to the curse of dimensionality.
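To make the model-plus-dynamic-programming recipe concrete, the sketch below runs tabular value iteration over a uniform discretization of a toy one-dimensional continuous state space. This is only an illustration of the baseline approach the abstract describes; it is not the paper's variable-resolution algorithm, which refines the partition selectively instead of using a fixed grid. All names, dynamics, and parameters (N_BINS, step, the reward definition) are hypothetical and chosen for brevity.

# Illustrative sketch: value iteration on a tabulated model over a uniform
# discretization of a continuous 1-D state space. A uniform grid needs
# N_BINS^d cells in d dimensions, which is the curse of dimensionality the
# paper's variable-resolution approach is designed to avoid.
import numpy as np

N_BINS = 50          # cells in the uniform 1-D discretization (hypothetical)
N_ACTIONS = 2        # e.g. push left / push right
GAMMA = 0.95         # discount factor

def step(s, a):
    """Toy deterministic dynamics on [0, 1]: drift left or right."""
    ds = 0.02 if a == 1 else -0.02
    return min(max(s + ds, 0.0), 1.0)

def to_cell(s):
    return min(int(s * N_BINS), N_BINS - 1)

# "Learn" a model by tabulating transitions from cell centres (here the
# dynamics are known, so this simply records them).
next_cell = np.zeros((N_BINS, N_ACTIONS), dtype=int)
reward = np.zeros((N_BINS, N_ACTIONS))
for c in range(N_BINS):
    s = (c + 0.5) / N_BINS
    for a in range(N_ACTIONS):
        s2 = step(s, a)
        next_cell[c, a] = to_cell(s2)
        reward[c, a] = 1.0 if s2 > 0.9 else 0.0   # reward near the right edge

# Dynamic programming (value iteration) over the tabulated model.
V = np.zeros(N_BINS)
for _ in range(500):
    Q = reward + GAMMA * V[next_cell]             # shape (N_BINS, N_ACTIONS)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-6:
        break
    V = V_new

policy = Q.argmax(axis=1)   # the derived "action map" over grid cells
print(policy)

In this toy problem the derived action map simply pushes right everywhere; the point is that the same model-learning plus dynamic-programming loop carries over to real problems, where the cost of a uniform grid motivates allocating resolution only where the policy needs it.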

BibTeX

@conference{Moore-1991-15826,
author = {Andrew Moore},
title = {Variable Resolution Dynamic Programming: Efficiently Learning Action Maps in Multivariate Real-valued State-spaces},
booktitle = {Proceedings of (ICML) International Conference on Machine Learning},
year = {1991},
month = {June},
editor = {L. Birnbaum and G. Collins},
pages = {333--337},
publisher = {Morgan Kaufmann},
}