Learning Neural Parsers with Deterministic Differentiable Imitation Learning - Robotics Institute Carnegie Mellon University

Workshop Paper, RSS '18 Workshop on Perspectives in Robot Learning: Causality and Imitation, June 2018

Abstract

We explore the problem of learning to decompose spatial tasks into segments, as exemplified by the problem of a painting robot covering a large object. Inspired by the ability of classical decision tree algorithms to construct structured partitions of their input spaces, we formulate the problem of decomposing objects into segments as a parsing approach. We observe that the derivation of a parse tree that decomposes the object into segments closely resembles a decision tree constructed by ID3, which can be built when ground truth is available. We learn to imitate an expert parsing oracle, such that our neural parser can generalize to parse natural images without ground truth. We introduce a novel deterministic policy gradient update, DRAG (i.e., DeteRministically AGgrevate), in the form of a deterministic actor-critic variant of AggreVaTeD [1], to train our neural parser. From another perspective, our approach is a variant of the Deterministic Policy Gradient [2, 3] suitable for the imitation learning setting. The deterministic policy representation offered by training our neural parser with DRAG allows it to outperform state-of-the-art imitation and reinforcement learning approaches.
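The core mechanism the abstract points to, backpropagating a critic's action-gradient through a deterministic policy (the DPG-style update underlying DRAG), can be illustrated with a toy actor-critic sketch. Everything below is an illustrative assumption rather than the paper's actual neural parser: the affine actor, the hand-coded quadratic critic (standing in for a learned estimate of the expert oracle's cost-to-go, as in AggreVaTeD), and all dimensions and hyperparameters are made up for the example.

```python
import numpy as np

# Toy deterministic actor-critic update in the style of DPG.
# Assumptions (not from the paper): affine actor, known quadratic critic.

rng = np.random.default_rng(0)
state_dim, action_dim = 4, 2

# Affine deterministic actor: a = W s + b
W = rng.normal(scale=0.1, size=(action_dim, state_dim))
b = np.zeros(action_dim)

# Action the critic prefers; in an imitation setting the critic would
# instead estimate the expert's value, so this target is hypothetical.
a_star = np.array([1.0, -1.0])

def critic_grad(a):
    """dQ/da for an assumed quadratic critic Q(s, a) = -||a - a_star||^2."""
    return -2.0 * (a - a_star)

alpha = 0.05
for _ in range(500):
    s = rng.normal(size=state_dim)
    a = W @ s + b                  # deterministic action: no sampling step
    # Deterministic policy gradient via the chain rule through the actor:
    # dQ/dW = (dQ/da) s^T  and  dQ/db = dQ/da.
    g = critic_grad(a)
    W += alpha * np.outer(g, s)
    b += alpha * g
```

Because the action is a deterministic function of the state, the gradient flows analytically from the critic into the actor parameters, with no score-function estimator or action sampling, which is the property the abstract credits for outperforming stochastic-policy baselines.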

BibTeX

@workshop{Shankar-2018-109836,
author = {Tanmay Shankar and Nicholas Rhinehart and Katharina Muelling and Kris M. Kitani},
title = {Learning Neural Parsers with Deterministic Differentiable Imitation Learning},
booktitle = {Proceedings of RSS '18 Workshop on Perspectives in Robot Learning: Causality and Imitation},
year = {2018},
month = {June},
}