Influence and Variance of a Markov Chain: Application to Adaptive Discretization in Optimal Control

Remi Munos and Andrew Moore
Conference Paper, Proceedings of the IEEE Conference on Decision and Control, Vol. 2, pp. 1464–1469, December 1999

Abstract

This paper addresses the difficult problem of deciding where to refine the resolution of adaptive discretizations for solving continuous time-and-space deterministic optimal control problems. We introduce two measures, the influence and the variance of a Markov chain. Influence measures the extent to which changes at one state affect the value function at other states. Variance measures the heterogeneity of the future cumulated rewards (whose mean is the value function). We combine these two measures to derive an efficient, non-local splitting criterion that takes into account the impact of a state on other states when deciding whether to split. We illustrate this method on the non-linear, two-dimensional "Car on the Hill" problem and on the 4D "space-shuttle" and "airplane-meeting" control problems.
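The two quantities named in the abstract can be illustrated on a toy Markov reward chain. The sketch below is not the paper's construction (which operates on the discretized control problem); the discount factor, three-state transition matrix, and reward vector are invented for the demo. It shows influence as the sensitivity of the value at one state to a change at another, and variance as the spread of the discounted cumulated return around the value function.

```python
# Minimal sketch, assuming a finite Markov reward chain with reward r(i)
# received on leaving state i and discount factor gamma. All numbers below
# are hypothetical examples, not taken from the paper.
import numpy as np

gamma = 0.9
P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.0, 0.0, 1.0]])   # state 2 is absorbing
r = np.array([1.0, 0.0, 2.0])

I = np.eye(3)

# Value function: V = (I - gamma*P)^{-1} r.
V = np.linalg.solve(I - gamma * P, r)

# Influence-style sensitivity matrix: entry (i, j) is dV(i)/dr(j), i.e. how
# much a change at state j propagates to the value at state i.
influence = np.linalg.inv(I - gamma * P)

# Variance of the discounted cumulated reward, from the Bellman-like relation
#   sigma2 = gamma^2 * (P @ sigma2 + P @ V**2 - (P @ V)**2),
# solved as a linear system in sigma2 with V treated as known.
b = gamma**2 * (P @ V**2 - (P @ V) ** 2)
sigma2 = np.linalg.solve(I - gamma**2 * P, b)

print("V         =", V)          # mean of future cumulated rewards
print("influence =\n", influence)
print("sigma2    =", sigma2)     # zero at the absorbing state, as expected
```

In this toy setting, a row of `influence` with large off-diagonal entries flags a state whose value is strongly shaped by other states, and a large `sigma2` flags a state whose future returns are heterogeneous; combining signals of this kind is the idea behind the paper's splitting criterion.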

BibTeX

@conference{Munos-1999-15065,
author = {Remi Munos and Andrew Moore},
title = {Influence and Variance of a Markov Chain: Application to Adaptive Discretization in Optimal Control},
booktitle = {Proceedings of IEEE Conference on Decision and Control},
year = {1999},
month = {December},
volume = {2},
pages = {1464--1469},
}