Nonparametric Representation of Policies and Value Functions: A Trajectory-Based Approach

Chris Atkeson and Jun Morimoto
Neural Information Processing Systems (NIPS), 2002.


Download
  • Adobe Portable Document Format (PDF), 265 KB

Abstract
A longstanding goal of reinforcement learning is to develop nonparametric representations of policies and value functions that support rapid learning without suffering from interference or the curse of dimensionality. We have developed a trajectory-based approach, in which policies and value functions are represented nonparametrically along trajectories. These trajectories, policies, and value functions are updated as the value function becomes more accurate or as a model of the task is updated. We have applied this approach to periodic tasks such as hopping and walking, which required handling discount factors and discontinuities in the task dynamics, and using function approximation to represent value functions at discontinuities. We also describe extensions of the approach to make the policies more robust to modeling error and sensor noise.
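
The representational idea described in the abstract lends itself to a compact illustration. Below is a minimal Python sketch, not the authors' implementation: the class names, the Euclidean nearest-neighbor lookup, and the locally quadratic value model with a locally linear feedback policy are all assumptions made for illustration. It shows one plausible way to store a value function and policy nonparametrically at points along a trajectory and answer queries from the nearest stored point.

import numpy as np

class TrajectoryPoint:
    """One stored point along a trajectory: state x, a locally quadratic
    value model (V, V_x, V_xx), and a locally linear policy
    u(x) = u0 + K (x - x_stored). All names are illustrative."""
    def __init__(self, x, V, V_x, V_xx, u0, K):
        self.x, self.V, self.V_x, self.V_xx = x, V, V_x, V_xx
        self.u0, self.K = u0, K

class TrajectoryLibrary:
    """Nonparametric representation: the value function and policy are
    defined only at stored trajectory points and extrapolated locally."""
    def __init__(self):
        self.points = []

    def add(self, x, V, V_x, V_xx, u0, K):
        self.points.append(TrajectoryPoint(x, V, V_x, V_xx, u0, K))

    def _nearest(self, x):
        # Euclidean nearest neighbor (an assumption; a task-specific
        # metric or a spatial index could be substituted).
        return min(self.points, key=lambda p: np.linalg.norm(x - p.x))

    def value(self, x):
        # Second-order Taylor expansion of V about the nearest stored state.
        p = self._nearest(x)
        dx = x - p.x
        return p.V + p.V_x @ dx + 0.5 * dx @ p.V_xx @ dx

    def policy(self, x):
        # Linear feedback about the nearest stored (state, action) pair.
        p = self._nearest(x)
        return p.u0 + p.K @ (x - p.x)

# Example query: one stored point with a 2D state and a 1D action.
lib = TrajectoryLibrary()
lib.add(x=np.array([0.0, 0.0]), V=0.0, V_x=np.zeros(2), V_xx=np.eye(2),
        u0=np.zeros(1), K=np.array([[-1.0, -0.5]]))
x_query = np.array([0.1, -0.2])
print(lib.value(x_query))   # locally quadratic value estimate
print(lib.policy(x_query))  # locally linear feedback action

This sketch covers only the query side; the updates the abstract mentions, in which trajectories and their stored local models are refreshed as the value function or the task model improves, would sit on top of such a structure.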

Notes
Associated Project(s): Dynamic Biped
Number of pages: 8

Text Reference
Chris Atkeson and Jun Morimoto, "Nonparametric Representation of Policies and Value Functions: A Trajectory-Based Approach," Neural Information Processing Systems (NIPS), 2002.

BibTeX Reference
@inproceedings{Atkeson_2002_5598,
   author = "Chris Atkeson and Jun Morimoto",
   title = "Nonparametric Representation of Policies and Value Functions: A Trajectory-Based Approach",
   booktitle = "Neural Information Processing Systems 2002",
   year = "2002",
}