Truncated Horizon Policy Search: Combining Reinforcement Learning and Imitation Learning

Wen Sun, J. Andrew Bagnell, and Byron Boots
Conference Paper, Proceedings of the International Conference on Learning Representations (ICLR), April 2018

Abstract

In this paper, we propose to combine imitation and reinforcement learning via the idea of reward shaping using an oracle. We study the effect of a near-optimal cost-to-go oracle on the planning horizon and demonstrate that the cost-to-go oracle shortens the learner's planning horizon as a function of its accuracy: a globally optimal oracle can shorten the planning horizon to one, leading to a one-step greedy Markov Decision Process which is much easier to optimize, while an oracle that is far from optimality requires planning over a longer horizon to achieve near-optimal performance. Hence our new insight bridges the gap and interpolates between imitation learning and reinforcement learning. Motivated by these insights, we propose Truncated HORizon Policy Search (THOR), a method that focuses on searching for policies that maximize the total reshaped reward over a finite planning horizon when the oracle is sub-optimal. We experimentally demonstrate that a gradient-based implementation of THOR can achieve superior performance compared to both RL and IL baselines even when the oracle is sub-optimal.
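The sketch below illustrates the two ideas the abstract describes: potential-based reward shaping with an oracle value estimate, and returns truncated to a horizon of k steps. It is not the authors' implementation; the function names, the NumPy trajectory layout, and the assumption that the oracle supplies a value estimate Phi(s) for every visited state are illustrative.

import numpy as np

def shaped_rewards(rewards, oracle_values, gamma=0.99):
    # Potential-based reward shaping with an oracle value estimate (illustrative).
    # rewards:       r_t for t = 0..T-1 along one trajectory
    # oracle_values: Phi(s_t) for t = 0..T (one extra entry for the final state)
    # Shaped reward: r'_t = r_t + gamma * Phi(s_{t+1}) - Phi(s_t)
    rewards = np.asarray(rewards, dtype=float)
    phi = np.asarray(oracle_values, dtype=float)
    return rewards + gamma * phi[1:] - phi[:-1]

def truncated_returns(shaped, k, gamma=0.99):
    # k-step truncated discounted returns of the shaped rewards.
    # With a perfect oracle, k = 1 already yields a one-step greedy objective;
    # a weaker oracle calls for a larger k.
    T = len(shaped)
    returns = np.zeros(T)
    for t in range(T):
        horizon = min(k, T - t)
        discounts = gamma ** np.arange(horizon)
        returns[t] = np.dot(discounts, shaped[t:t + horizon])
    return returns

A gradient-based policy search, as in the paper, would then use these truncated shaped returns in place of the full-horizon return when estimating the policy gradient; the choice of k interpolates between the one-step (imitation-like) and full-horizon (pure RL) regimes.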

BibTeX

@conference{Sun-2018-105214,
author = {Wen Sun and J. Andrew Bagnell and Byron Boots},
title = {Truncated Horizon Policy Search: Combining Reinforcement Learning and Imitation Learning},
booktitle = {Proceedings of (ICLR) International Conference on Learning Representations},
year = {2018},
month = {April},
keywords = {Reinforcement Learning, Imitation Learning},
}