
Variable Resolution Reinforcement Learning

Andrew Moore
Tech. Report, CMU-RI-TR-95-19, Robotics Institute, Carnegie Mellon University, April, 1995

Abstract

Can reinforcement learning ever become a practical method for real control problems? This paper begins by reviewing three reinforcement learning algorithms to study their shortcomings and to motivate subsequent improvements. By assuming that paths must be continuous, we can substantially reduce the proportion of state space which the learning algorithms need to explore. Next, we introduce the parti-game algorithm for variable resolution reinforcement learning. In addition to exploring state space and developing a control policy to achieve a task, parti-game also learns a kd-tree partitioning of state space. Some experiments are described which show parti-game in operation on a non-linear dynamics problem and a path learning/planning task in a 9-dimensional configuration space.
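
Below is a minimal Python sketch of the kind of variable-resolution data structure the abstract describes: an axis-aligned kd-tree partition of a continuous state space whose cells are split in half along their longest axis, so resolution increases only where it is needed. This is an illustration of the general idea, not the parti-game algorithm itself; the names used here (Cell, split, locate) are hypothetical.

from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class Cell:
    """One axis-aligned cell of a variable-resolution (kd-tree style) partition."""
    low: Tuple[float, ...]                       # lower corner of the cell
    high: Tuple[float, ...]                      # upper corner of the cell
    children: Optional[Tuple["Cell", "Cell"]] = None

    def contains(self, x: Tuple[float, ...]) -> bool:
        return all(l <= xi <= h for l, xi, h in zip(self.low, x, self.high))

    def split(self) -> None:
        """Split this leaf in half along its longest axis (kd-tree style)."""
        axis = max(range(len(self.low)),
                   key=lambda i: self.high[i] - self.low[i])
        mid = 0.5 * (self.low[axis] + self.high[axis])
        lo_high = list(self.high); lo_high[axis] = mid
        hi_low = list(self.low);   hi_low[axis] = mid
        self.children = (Cell(self.low, tuple(lo_high)),
                         Cell(tuple(hi_low), self.high))

    def locate(self, x: Tuple[float, ...]) -> "Cell":
        """Return the leaf cell containing state x."""
        if self.children is None:
            return self
        for child in self.children:
            if child.contains(x):
                return child.locate(x)
        return self  # x lies outside this cell; caller should check contains()


# Example: a 2-D state space, refined only near one corner.
root = Cell(low=(0.0, 0.0), high=(1.0, 1.0))
root.split()
root.locate((0.9, 0.1)).split()          # refine only where needed
leaf = root.locate((0.95, 0.05))
print(leaf.low, leaf.high)               # -> (0.5, 0.0) (1.0, 0.5)

In parti-game, such cells would additionally carry learned outcome and cost information used to decide where further splitting is worthwhile; that bookkeeping is omitted here.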

BibTeX

@techreport{Moore-1995-13869,
author = {Andrew Moore},
title = {Variable Resolution Reinforcement Learning},
year = {1995},
month = {April},
institution = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-95-19},
}