
A general convergence method for Reinforcement Learning in the continuous case

Remi Munos
Conference Paper, Proceedings of European Conference on Machine Learning (ECML '98), pp. 394-405, April, 1998

Abstract

In this paper, we propose a general method for designing convergent Reinforcement Learning algorithms in the case of continuous state space and time variables. The method is based on discretizations of the continuous process by convergent approximation schemes: the Hamilton-Jacobi-Bellman equation is replaced by a Dynamic Programming (DP) equation for some Markovian Decision Process (MDP). If the data of the MDP were known, we could compute the value of the DP equation by using some DP updating rules. However, in the Reinforcement Learning (RL) approach, the state dynamics as well as the reinforcement functions are a priori unknown, making it impossible to use DP rules. Here we prove a general convergence theorem stating that if the values updated by some RL algorithm are close enough (in the sense that they satisfy a "weak" contraction property) to those of the DP, then they converge to the value function of the continuous process. The method is very general and is illustrated with a model-based algorithm built from a finite-difference approximation scheme.
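To make the model-based setting described above concrete, here is a minimal sketch (not the paper's exact scheme) of the discretization idea: a one-dimensional deterministic control problem is turned into a finite MDP by a finite-difference (upwind) scheme on a grid, and the resulting DP equation is solved by value iteration. The grid resolution `delta`, the dynamics dx/dt = a with a in {-1, +1}, the reward r(x) = -x^2, and the discount rate 0.1 are all illustrative assumptions, not values taken from the paper.

```python
# Hypothetical illustration (not the paper's exact scheme): discretize a 1-D
# deterministic control problem  dx/dt = a,  a in {-1, +1},  with discounted
# reward r(x) = -x^2, into a finite MDP via an upwind finite-difference scheme,
# then solve the resulting Dynamic Programming equation by value iteration.
import numpy as np

delta = 0.05                     # space step of the grid (assumed)
tau = delta                      # time step induced by the scheme (|a| = 1)
gamma = np.exp(-0.1 * tau)       # per-step discount, rate lambda = 0.1 (assumed)
xs = np.arange(-1.0, 1.0 + delta, delta)
n = len(xs)

def reward(x):
    return -x ** 2

V = np.zeros(n)
for _ in range(2000):            # DP updating rule: value iteration
    V_new = np.empty(n)
    for i, x in enumerate(xs):
        # Upwind scheme: control a = +1 moves to grid point i+1, a = -1 to i-1;
        # trajectories are clamped at the boundary of the grid.
        up = V[min(i + 1, n - 1)]
        down = V[max(i - 1, 0)]
        V_new[i] = tau * reward(x) + gamma * max(up, down)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

print("approximate value at x = 0:", V[np.argmin(np.abs(xs))])
```

In the RL setting of the paper the transition probabilities and reinforcements of such an MDP are not known in advance, so the exact DP update above cannot be applied; the paper's convergence theorem covers algorithms whose updates only approximate it, up to a "weak" contraction property.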

BibTeX

@conference{Munos-1998-16594,
author = {Remi Munos},
title = {A general convergence method for Reinforcement Learning in the continuous case},
booktitle = {Proceedings of European Conference on Machine Learning (ECML '98)},
year = {1998},
month = {April},
pages = {394-405},
}