The Effect of Representation and Knowledge on Goal-Directed Exploration with Reinforcement-Learning Algorithms

Sven Koenig and Reid Simmons
Journal Article, Machine Learning, Vol. 22, No. 1, pp. 227-250, 1996

Abstract

We analyze the complexity of on-line reinforcement-learning algorithms applied to goal-directed exploration tasks. Previous work had concluded that, even in deterministic state spaces, initially uninformed reinforcement learning required at least exponential time for such problems, or that it had polynomial worst-case time complexity only if the learning methods were augmented. We prove that, to the contrary, the algorithms are tractable with only a simple change in the reward structure (penalizing the agent for action executions) or in the initialization of the values that they maintain. In particular, we provide tight complexity bounds for both Watkins' Q-learning and Heger's Q-hat-learning and show how their complexity depends on properties of the state spaces. We also demonstrate how one can decrease the complexity even further by either learning action models or utilizing prior knowledge of the topology of the state spaces. Our results provide guidance for empirical reinforcement-learning researchers on how to distinguish hard reinforcement-learning problems from easy ones and how to represent them in a way that allows them to be solved efficiently.
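As an illustration of the representation change the abstract describes, the following Python sketch (not code from the paper) runs greedy one-step Q-learning with the action-penalty reward structure (every action execution is rewarded with -1) and zero-initialized Q-values on a small deterministic state space. The example graph, the state and action names, and the parameter choices (learning rate 1, no discounting, random tie-breaking) are assumptions made purely for illustration.

import random

# Hypothetical deterministic state space: successors[state][action] -> next state.
successors = {
    "s0": {"a": "s1", "b": "s0"},
    "s1": {"a": "s2", "b": "s0"},
    "s2": {"a": "goal", "b": "s1"},
    "goal": {},
}
GOAL = "goal"

# Zero-initialized Q-values; under the action-penalty representation this
# initialization is admissible (optimistic), which is what makes the
# exploration tractable in deterministic state spaces.
Q = {s: {a: 0.0 for a in acts} for s, acts in successors.items()}

def explore(start, max_steps=1000):
    """Greedy one-step Q-learning: reward -1 per action, learning rate 1, no discounting."""
    s, steps = start, 0
    while s != GOAL and steps < max_steps:
        best = max(Q[s].values())
        a = random.choice([a for a, v in Q[s].items() if v == best])  # break ties randomly
        s_next = successors[s][a]
        future = 0.0 if s_next == GOAL else max(Q[s_next].values())
        Q[s][a] = -1.0 + future  # deterministic update of the Q-value
        s, steps = s_next, steps + 1
    return steps

print("actions executed before reaching the goal:", explore("s0"))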

Notes
Also appeared as a book chapter in: Recent Advances in Reinforcement Learning, L. Kaelbling (editor), Kluwer Academic Publishers, 1996.

BibTeX

@article{Koenig-1996-16346,
author = {Sven Koenig and Reid Simmons},
title = {The Effect of Representation and Knowledge on Goal-Directed Exploration with Reinforcement-Learning Algorithms},
journal = {Machine Learning},
year = {1996},
month = {January},
volume = {22},
number = {1},
pages = {227--250},
}