The Effect of Representation and Knowledge on Goal-Directed Exploration with Reinforcement Learning Algorithms

S. Koenig and Reid Simmons
Journal Article, Machine Learning Journal, pp. 227–250, January 1996




We analyze the complexity of on-line reinforcement learning algorithms applied to goal-directed exploration tasks. Previous work had concluded that, even in deterministic state spaces, initially uninformed reinforcement learning was at least exponential for such problems, or that it was of polynomial worst-case time-complexity only if the learning methods were augmented. We prove that, to the contrary, the algorithms are tractable with only a simple change in the reward structure (“penalizing the agent for action executions”) or in the initialization of the values that they maintain. In particular, we provide tight complexity bounds for both Watkins’ Q-learning and Heger’s Q-hat-learning and show how their complexity depends on properties of the state spaces. We also demonstrate how one can decrease the complexity even further by either learning action models or utilizing prior knowledge of the topology of the state spaces. Our results provide guidance for empirical reinforcement learning researchers on how to distinguish hard reinforcement learning problems from easy ones and how to represent them in a way that allows them to be solved efficiently.
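The reward-structure change the abstract describes ("penalizing the agent for action executions") combined with zero-initialized values can be illustrated with a minimal Q-learning sketch. The deterministic chain world below is a hypothetical example for illustration, not the paper's experimental setup; every action costs −1, and the zero initialization is optimistic relative to these rewards, which drives systematic goal-directed exploration under a purely greedy policy:

```python
import random

# Hypothetical deterministic chain state space: states 0..N-1, goal at N-1.
# Actions: 0 = left, 1 = right. Each action execution is rewarded -1
# (the "action-penalty" representation); the episode ends at the goal.
N = 6
ACTIONS = (0, 1)

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(N - 1, state + 1)
    return nxt, -1, nxt == N - 1  # next state, reward, done flag

def q_learning(episodes=200, alpha=1.0, gamma=1.0, seed=0):
    rng = random.Random(seed)
    # Zero-initialized Q-values: optimistic under action-penalty rewards,
    # so untried actions always look at least as good as tried ones.
    q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while True:
            # Greedy w.r.t. the current Q-values, ties broken randomly.
            best = max(q[(s, a)] for a in ACTIONS)
            a = rng.choice([a for a in ACTIONS if q[(s, a)] == best])
            nxt, r, done = step(s, a)
            target = r if done else r + gamma * max(q[(nxt, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (target - q[(s, a)])
            if done:
                break
            s = nxt
    return q

q = q_learning()
# In this chain the learned greedy policy moves right in every non-goal state.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N - 1)]
print(policy)  # → [1, 1, 1, 1, 1]
```

Because the world is deterministic, a learning rate of 1 suffices, and the Q-values converge exactly to the negated distances to the goal (e.g. Q(4, right) = −1). With zero-reward goal states and the −1 action penalty, greedy action selection alone explores the state space, which is the tractability effect the abstract analyzes.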

@article{koenig1996representation,
  author  = {S. Koenig and Reid Simmons},
  title   = {The Effect of Representation and Knowledge on Goal-Directed Exploration with Reinforcement Learning Algorithms},
  journal = {Machine Learning Journal},
  year    = {1996},
  month   = {January},
  pages   = {227--250},
}