
Variable Resolution Discretization in Optimal Control

Remi Munos and Andrew Moore
Journal Article, Machine Learning Journal, Vol. 49, No. 2, pp. 291-323, November 2002

Abstract

The problem of state abstraction is of central importance in optimal control, reinforcement learning and Markov decision processes. This paper studies the case of variable resolution state abstraction for continuous time and space, deterministic dynamic control problems in which near-optimal policies are required. We begin by defining a class of variable resolution policy and value function representations based on Kuhn triangulations embedded in a kd-trie. We then consider top-down approaches to choosing which cells to split in order to generate improved policies. The core of this paper is the introduction and evaluation of a wide variety of possible splitting criteria. We begin with local approaches based on value function and policy properties that use only features of individual cells in making split choices. Later, by introducing two new non-local measures, influence and variance, we derive splitting criteria that allow one cell to efficiently take into account its impact on other cells when deciding whether to split. Influence is an efficiently-calculable measure of the extent to which changes in some state affect the value function of some other states. Variance is an efficiently-calculable measure of how risky some state in a Markov chain is: a low-variance state is one in which we would be very surprised if, during any one execution, the long-term reward attained from that state differed substantially from its expected value as given by the value function.
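
To make the top-down splitting idea concrete, the sketch below shows a toy Python version of variable resolution refinement: a kd-trie of axis-aligned cells over a 2-D state space, where the leaf with the largest value of a simple local criterion (the spread of value estimates at its corners) is repeatedly split along its longest axis. The names (Cell, refine, value_spread), the criterion, and the toy value function are illustrative assumptions only; the paper itself interpolates values with Kuhn triangulations inside the kd-trie cells and evaluates many richer criteria, including the non-local influence and variance measures.

# Illustrative sketch (not the paper's algorithm): a kd-trie of cells refined
# where a simple local splitting criterion, the corner-value spread, is large.

from dataclasses import dataclass, field
from typing import List, Optional
import itertools


@dataclass
class Cell:
    lo: List[float]                    # lower corner of the cell
    hi: List[float]                    # upper corner of the cell
    corner_values: dict = field(default_factory=dict)  # value estimates at corners
    split_dim: Optional[int] = None    # axis along which this cell was split
    children: Optional[tuple] = None   # (low child, high child) after a split

    def corners(self):
        # Enumerate all 2^d corners of the cell.
        dims = range(len(self.lo))
        for bits in itertools.product((0, 1), repeat=len(self.lo)):
            yield tuple(self.hi[d] if bits[d] else self.lo[d] for d in dims)

    def value_spread(self) -> float:
        # Local criterion: difference between the largest and smallest corner
        # values, a crude proxy for value-function variation inside the cell.
        vals = [self.corner_values[c] for c in self.corners()]
        return max(vals) - min(vals)

    def split(self, value_fn):
        # kd-trie style split: halve the cell along its longest axis and
        # evaluate value_fn at the corners of both children.
        dim = max(range(len(self.lo)), key=lambda d: self.hi[d] - self.lo[d])
        mid = 0.5 * (self.lo[dim] + self.hi[dim])
        low = Cell(self.lo[:], self.hi[:dim] + [mid] + self.hi[dim + 1:])
        high = Cell(self.lo[:dim] + [mid] + self.lo[dim + 1:], self.hi[:])
        for child in (low, high):
            child.corner_values = {c: value_fn(c) for c in child.corners()}
        self.split_dim, self.children = dim, (low, high)


def refine(root: Cell, value_fn, threshold: float, max_splits: int = 50):
    # Repeatedly split the leaf whose local criterion is largest, until every
    # leaf falls below the threshold or the splitting budget is exhausted.
    for _ in range(max_splits):
        leaves, stack = [], [root]
        while stack:
            cell = stack.pop()
            if cell.children:
                stack.extend(cell.children)
            else:
                leaves.append(cell)
        worst = max(leaves, key=Cell.value_spread)
        if worst.value_spread() < threshold:
            break
        worst.split(value_fn)


if __name__ == "__main__":
    # Toy stand-in for the optimal value function of some control problem.
    def value_fn(x):
        return x[0] ** 2 - x[1]

    root = Cell([0.0, 0.0], [1.0, 1.0])
    root.corner_values = {c: value_fn(c) for c in root.corners()}
    refine(root, value_fn, threshold=0.1)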

BibTeX

@article{Munos-2002-16685,
author = {Remi Munos and Andrew Moore},
title = {Variable Resolution Discretization in Optimal Control},
journal = {Machine Learning Journal},
year = {2002},
month = {November},
volume = {49},
number = {2},
pages = {291--323},
}