Learning Neural Parsers with Deterministic Differentiable Imitation Learning

Master's Thesis, Tech. Report, CMU-RI-TR-18-44, Robotics Institute, Carnegie Mellon University, August, 2018

Abstract

We explore the problem of learning to decompose spatial tasks into segments, as exemplified by the problem of a painting robot covering a large object. Inspired by the ability of classical decision tree algorithms to construct structured partitions of their input spaces, we formulate the problem of decomposing objects into segments as a parsing approach. We observe that deriving a parse tree that decomposes the object into segments closely resembles constructing a decision tree with ID3, which is possible when ground truth is available.

We learn to imitate an expert parsing oracle, such that our neural parser can generalize to parse natural images without ground truth. We introduce a novel deterministic policy gradient update, DRAG, in the form of a deterministic actor-critic variant of AggreVaTeD (Sun et al., 2017), to train our neural parser. Alternatively, our approach may be seen as a variant of the Deterministic Policy Gradient (Silver et al., 2014) suitable for the imitation learning setting. Training our neural parser via DRAG allows it to outperform several existing imitation and reinforcement learning approaches.
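To make the flavor of such an update concrete, here is a minimal sketch of a deterministic policy gradient step in the imitation setting, where the critic's role is played by a stand-in cost-to-go measuring distance to an oracle's action. This is an illustrative toy, not the thesis's DRAG algorithm: the linear policy, the quadratic cost, and the synthetic expert `W_expert` are all assumptions made purely for the example. The key structural point it shows is the DPG-style chain rule, differentiating the critic through the deterministic action rather than scoring sampled actions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear deterministic policy: pi(s) = W @ s.
W = rng.normal(size=(2, 3))

# Stand-in expert oracle (an assumption for this sketch); in the imitation
# setting the oracle supplies the supervision the critic is built from.
W_expert = np.array([[1.0, 0.0, -1.0],
                     [0.5, 2.0,  0.0]])

def expert_action(s):
    return W_expert @ s

def critic_grad(s, a):
    # Gradient w.r.t. the action of a stand-in cost-to-go
    # Q(s, a) = ||a - a_expert(s)||^2.
    return 2.0 * (a - expert_action(s))

lr = 0.02
for _ in range(1000):
    s = rng.normal(size=3)      # sample a state
    a = W @ s                   # deterministic action, no sampling over actions
    # DPG-style update: dQ/dW = (dQ/da) s^T, pushed through the policy.
    W -= lr * np.outer(critic_grad(s, a), s)
```

Because the action is a deterministic function of the state, the critic's action-gradient flows directly into the policy parameters, which is the property the deterministic imitation update exploits.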

BibTeX

@mastersthesis{Shankar-2018-107283,
author = {Tanmay Shankar},
title = {Learning Neural Parsers with Deterministic Differentiable Imitation Learning},
year = {2018},
month = {August},
school = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-18-44},
keywords = {Imitation Learning, Reinforcement Learning, Policy Gradients, Parsing},
}