A Non-Parametric Approach to Dynamic Programming

Oliver Kroemer and Jan Peters
Conference Paper, Neural Information Processing Systems (NIPS), January, 2011




In this paper, we consider the problem of policy evaluation for continuous-state systems. We present a non-parametric approach to policy evaluation, which uses kernel density estimation to represent the system. For this model, the exact form of the value function can be determined and computed using Galerkin's method. Furthermore, we present a unified view of several well-known policy evaluation methods: we show that the same Galerkin method can be used to derive Least-Squares Temporal Difference learning, Kernelized Temporal Difference learning, and a discrete-state Dynamic Programming solution, as well as our proposed method. In a numerical evaluation of these algorithms, the proposed approach outperformed the other methods.
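To make the family of methods concrete, the following is a minimal sketch of one of the approaches the abstract mentions, kernelized Least-Squares Temporal Difference learning: the value function is represented with Gaussian kernel features centered at the sampled states, and the LSTD fixed point is solved on observed transitions. The toy one-dimensional chain, the bandwidth, and all function names are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

def gaussian_kernel(x, centers, bw=0.5):
    # Feature vector for state x: Gaussian kernels centered at the sample states.
    return np.exp(-((x - centers) ** 2) / (2.0 * bw ** 2))

def kernel_lstd(S, R, S_next, gamma=0.9, bw=0.5, reg=1e-6):
    # Kernelized LSTD: represent V(s) = phi(s)^T w with kernel features and
    # solve the least-squares Bellman fixed point  A w = b  on the samples.
    Phi = np.stack([gaussian_kernel(s, S, bw) for s in S])
    Phi_next = np.stack([gaussian_kernel(s, S, bw) for s in S_next])
    A = Phi.T @ (Phi - gamma * Phi_next) + reg * np.eye(len(S))
    b = Phi.T @ R
    w = np.linalg.solve(A, b)
    return lambda x: gaussian_kernel(x, S, bw) @ w

# Toy 1-D chain: dynamics contract toward the origin, reward peaks at s = 0.
rng = np.random.default_rng(0)
S = rng.uniform(-2.0, 2.0, size=200)
S_next = 0.8 * S + 0.1 * rng.standard_normal(200)
R = np.exp(-S ** 2)
V = kernel_lstd(S, R, S_next)
# The estimated value should be highest near the rewarded region around s = 0.
print(V(0.0) > V(2.0))
```

Regularizing `A` keeps the solve well conditioned when kernel features of nearby samples are nearly collinear; the paper's own model additionally derives the value function's exact form under a kernel density estimate of the system, which this generic LSTD sketch does not reproduce.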

@inproceedings{kroemer2011nonparametric,
  author    = {Oliver Kroemer and Jan Peters},
  title     = {A Non-Parametric Approach to Dynamic Programming},
  booktitle = {Neural Information Processing Systems (NIPS)},
  year      = {2011},
  month     = {January},
}