Variable Resolution Reinforcement Learning

Andrew Moore
tech. report CMU-RI-TR-95-19, Robotics Institute, Carnegie Mellon University, May, 1995



Abstract
Can reinforcement learning ever become a practical method for real control problems? This paper begins by reviewing three reinforcement learning algorithms to study their shortcomings and to motivate subsequent improvements. By assuming that paths must be continuous, we can substantially reduce the proportion of state space which the learning algorithms need to explore. Next, we introduce the parti-game algorithm for variable resolution reinforcement learning. In addition to exploring state space and developing a control policy to achieve a task, parti-game also learns a kd-tree partitioning of state space. Some experiments are described which show parti-game in operation on non-linear dynamics problems and a path learning/planning task in a 9-dimensional configuration space.
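The variable-resolution idea in the abstract — a kd-tree over state space whose cells are subdivided only where finer resolution is needed — can be illustrated with a minimal sketch. The `Cell` class, its longest-axis split rule, and the `locate` method below are illustrative assumptions for a 2-D state space, not Moore's actual parti-game implementation.

```python
# Minimal sketch of a variable-resolution kd-tree partition of a
# 2-D state space, in the spirit of parti-game. The split rule
# (halve the longest axis) is an assumption for illustration.

class Cell:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi      # axis-aligned bounding box
        self.axis = None               # split axis (None => leaf)
        self.mid = None
        self.left = self.right = None

    def split(self):
        # Subdivide this leaf in half along its longest axis, so
        # resolution increases only where refinement is requested.
        widths = [h - l for l, h in zip(self.lo, self.hi)]
        ax = widths.index(max(widths))
        mid = 0.5 * (self.lo[ax] + self.hi[ax])
        self.axis, self.mid = ax, mid
        hi_left = list(self.hi); hi_left[ax] = mid
        lo_right = list(self.lo); lo_right[ax] = mid
        self.left = Cell(list(self.lo), hi_left)
        self.right = Cell(lo_right, list(self.hi))

    def locate(self, x):
        # Descend to the leaf cell containing state x.
        if self.axis is None:
            return self
        child = self.left if x[self.axis] < self.mid else self.right
        return child.locate(x)

# Refine the partition near one corner of the unit square, leaving
# the rest of state space coarse.
root = Cell([0.0, 0.0], [1.0, 1.0])
root.split()                     # one coarse split
root.locate([0.9, 0.1]).split()  # refine only the cell near (0.9, 0.1)
leaf = root.locate([0.95, 0.05])
```

In a planner like parti-game, a leaf would hold the cell's outcome data and a cell would be split when the coarse partition cannot separate states from which the goal is reachable from those where it is not; here `leaf` ends up as the quarter-square `[0.5, 1.0] x [0.0, 0.5]`.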

Notes
Grant ID: F33615-93-1-1330
Number of pages: 21

Text Reference
Andrew Moore, "Variable Resolution Reinforcement Learning," tech. report CMU-RI-TR-95-19, Robotics Institute, Carnegie Mellon University, May, 1995

BibTeX Reference
@techreport{Moore_1995_378,
   author = "Andrew Moore",
   title = "Variable Resolution Reinforcement Learning",
   institution = "Robotics Institute, Carnegie Mellon University",
   month = "May",
   year = "1995",
   number = "CMU-RI-TR-95-19",
   address = "Pittsburgh, PA",
}