
Gradient Descent for General Reinforcement Learning

Leemon Baird and Andrew Moore
Conference Paper, Proceedings of Neural Information Processing Systems (NeurIPS), pp. 968-974, November 1998

Abstract

A simple learning rule is derived, the VAPS algorithm, which can be instantiated to generate a wide range of new reinforcement-learning algorithms. These algorithms solve a number of open problems, define several new approaches to reinforcement learning, and unify different approaches to reinforcement learning under a single theory. These algorithms all have guaranteed convergence, and include modifications of several existing algorithms, such as Q-learning, SARSA, and advantage learning, that were known to fail to converge on simple MDPs. In addition to these value-based algorithms, the rule also generates pure policy-search reinforcement-learning algorithms, which learn optimal policies without learning a value function. It further allows policy-search and value-based algorithms to be combined, thus unifying two very different approaches to reinforcement learning into a single Value and Policy Search (VAPS) algorithm. These algorithms also converge for POMDPs without requiring a proper belief state. Simulation results are given, and several areas for future research are discussed.
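
The abstract characterizes VAPS as stochastic gradient descent on a per-step error that blends a value-function error with a policy-search term, where each step's gradient is augmented by the error times accumulated log-probability gradients of earlier actions. The Python sketch below shows one plausible instantiation under stated assumptions: a tabular Q function, a Boltzmann (softmax) policy derived from it, a SARSA-style squared error blended with a baseline-minus-discounted-reward policy term, and episodic updates from stored trajectories. The class name, the particular error blend, and all hyperparameter values are illustrative assumptions, not the paper's exact formulation.

import numpy as np

# Minimal, illustrative sketch of a VAPS-style gradient-descent update.
# Assumptions (not from the paper): tabular Q values, a Boltzmann (softmax)
# policy derived from them, and the specific blended error defined below.

class VAPSSketch:
    def __init__(self, n_states, n_actions,
                 alpha=0.05,   # learning rate
                 gamma=0.9,    # discount factor
                 beta=0.5,     # blend: 0 = pure value error, 1 = pure policy search
                 b=0.0,        # reward baseline for the policy-search term
                 temp=1.0):    # Boltzmann temperature
        self.q = np.zeros((n_states, n_actions))
        self.alpha, self.gamma, self.beta, self.b, self.temp = alpha, gamma, beta, b, temp

    def policy(self, s):
        prefs = self.q[s] / self.temp
        prefs -= prefs.max()              # numerical stability
        p = np.exp(prefs)
        return p / p.sum()

    def act(self, s, rng):
        return int(rng.choice(len(self.q[s]), p=self.policy(s)))

    def episode_update(self, trajectory):
        """trajectory: list of (s, a, r, s_next, a_next, done) tuples.

        At each step t the update is
            dw = -alpha * (grad_w e_t + e_t * sum_{k<t} grad_w log P(a_k | s_k)),
        i.e. the immediate gradient of the blended error plus the error times an
        accumulated trace of log-probability gradients of earlier actions."""
        trace = np.zeros_like(self.q)     # sum of grad log P over past actions
        for t, (s, a, r, s2, a2, done) in enumerate(trajectory):
            # value-based (SARSA-style) squared error
            target = r if done else r + self.gamma * self.q[s2, a2]
            delta = target - self.q[s, a]
            e_val = 0.5 * delta ** 2
            # simple policy-search term: baseline minus discounted reward
            e_pol = self.b - (self.gamma ** t) * r
            e = (1.0 - self.beta) * e_val + self.beta * e_pol

            # direct gradient of the blended error w.r.t. Q (target held fixed)
            grad_e = np.zeros_like(self.q)
            grad_e[s, a] = -(1.0 - self.beta) * delta

            # VAPS-style step: immediate gradient plus error times the trace
            self.q -= self.alpha * (grad_e + e * trace)

            # grad log P(a | s) for the Boltzmann policy (recomputed from the
            # current Q for simplicity), accumulated into the trace
            p = self.policy(s)
            grad_logp = np.zeros_like(self.q)
            grad_logp[s] = -p / self.temp
            grad_logp[s, a] += 1.0 / self.temp
            trace += grad_logp

In use, an agent would collect (s, a, r, s_next, a_next, done) tuples by acting with act() in an environment and call episode_update() at the end of each episode; setting beta to 0 recovers a purely value-based update, while beta equal to 1 gives a purely policy-search update.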

BibTeX

@conference{Baird-1998-16688,
author = {Leemon Baird and Andrew Moore},
title = {Gradient Descent for General Reinforcement Learning},
booktitle = {Proceedings of (NeurIPS) Neural Information Processing Systems},
year = {1998},
month = {November},
pages = {968--974},
}