The Effect of Representation and Knowledge on Goal-Directed Exploration with Reinforcement Learning Algorithms: The Proofs

Sven Koenig and Reid Simmons
tech. report CMU-CS-95-177, Computer Science Department, Carnegie Mellon University, October, 1995



Abstract
We analyze the complexity of on-line reinforcement learning algorithms applied to goal-directed exploration tasks. Previous work had concluded that, even in deterministic state spaces, initially uninformed reinforcement learning was at least exponential for such problems, or that its worst-case time complexity was polynomial only if the learning methods were augmented. We prove that, to the contrary, the algorithms are tractable with only a simple change in the reward structure ("penalizing the agent for action executions") or in the initialization of the values that they maintain. In particular, we provide tight complexity bounds for both Watkins' Q-learning and Heger's Q-hat-learning and show how their complexity depends on properties of the state spaces. We also demonstrate how one can decrease the complexity even further by either learning action models or utilizing prior knowledge of the topology of the state spaces. Our results provide guidance for empirical reinforcement learning researchers on how to distinguish hard reinforcement learning problems from easy ones and how to represent them in a way that allows them to be solved efficiently.
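The following is a minimal sketch, not the authors' code, of the reward structure the report analyzes: zero-initialized Q-learning with a reward of -1 per action execution ("penalizing the agent for action executions") on a deterministic goal-directed exploration task. The gridworld, its size, and the random tie-breaking rule are illustrative assumptions, not details taken from the report.

import random

WIDTH, HEIGHT = 5, 5       # hypothetical deterministic gridworld
GOAL = (4, 4)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # deterministic moves

def step(state, action):
    """Deterministic transition; moves off the grid leave the state unchanged."""
    x, y = state[0] + action[0], state[1] + action[1]
    if 0 <= x < WIDTH and 0 <= y < HEIGHT:
        return (x, y)
    return state

# Zero-initialized Q-values: combined with the -1 reward per action execution,
# untried actions look better than tried ones, which drives systematic exploration.
q = {((x, y), a): 0.0
     for x in range(WIDTH) for y in range(HEIGHT) for a in ACTIONS}

state, steps = (0, 0), 0
while state != GOAL:
    # Greedy action selection; ties broken at random (an assumption here).
    best = max(q[(state, a)] for a in ACTIONS)
    action = random.choice([a for a in ACTIONS if q[(state, a)] == best])
    succ = step(state, action)
    # One-step Q-learning backup with learning rate 1 and no discounting,
    # which suffices in deterministic state spaces; the goal is absorbing.
    target = -1.0 + (0.0 if succ == GOAL else max(q[(succ, a)] for a in ACTIONS))
    q[(state, action)] = target
    state, steps = succ, steps + 1

print(f"goal reached after {steps} action executions")

With this reward structure, the initially uninformed agent provably reaches the goal after a polynomial number of action executions in deterministic state spaces, which is the tractability result the report establishes.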

Text Reference
Sven Koenig and Reid Simmons, "The Effect of Representation and Knowledge on Goal-Directed Exploration with Reinforcement Learning Algorithms: The Proofs," tech. report CMU-CS-95-177, Computer Science Department, Carnegie Mellon University, October 1995

BibTeX Reference
@techreport{Simmons_1995_2997,
   author = "Sven Koenig and Reid Simmons",
   title = "The Effect of Representation and Knowledge on Goal-Directed Exploration with Reinforcement Learning Algorithms: The Proofs",
   institution = "Computer Science Department, Carnegie Mellon University",
   month = "October",
   year = "1995",
   number = "CMU-CS-95-177",
   address = "Pittsburgh, PA",
}