Verbal Explanations for Deep Reinforcement Learning Neural Networks with Attention on Extracted Features

Xinzhi Wang, Shengcheng Yuan, Hui Zhang, Michael Lewis, and Katia Sycara
Conference Paper, Proceedings of the 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN '19), October 2019

Abstract

In recent years, there has been increasing interest in the transparency of Deep Neural Networks. Most work on transparency has been done for image classification. In this paper, we report on work on transparency in Deep Reinforcement Learning Networks (DRLNs). Such networks have been extremely successful in learning action control in Atari games. We focus on generating verbal (natural language) descriptions and explanations of deep reinforcement learning policies. Successful generation of verbal explanations would allow people (e.g., users, debuggers) to better understand the inner workings of DRLNs, which could ultimately increase trust in these systems. We present a generation model which consists of three parts: an encoder for feature extraction, an attention structure for selecting features from the encoder output, and a decoder for generating the explanation in natural language. Four variants of the attention structure - full attention, global attention, adaptive attention, and object attention - are designed and compared. The adaptive attention structure performs best among all the variants, even though the object attention structure is given additional information on object locations. Additionally, our experimental results show that the proposed encoder outperforms two baseline encoders (ResNet and VGG) in its ability to distinguish game-state images.
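To make the encoder-attention-decoder pipeline described above concrete, the following is a minimal PyTorch sketch of such an architecture, not the authors' implementation. It assumes a convolutional encoder over raw game-state frames, a single generic soft (additive) attention module, and an LSTM decoder that emits explanation tokens; all class names, layer sizes, and hyperparameters here are illustrative assumptions. The paper's four variants (full, global, adaptive, object attention) differ in how the attention weights are computed and what information they receive, which this sketch does not reproduce.

```python
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    """Convolutional encoder that turns a game-state frame into a grid of feature vectors."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, feat_dim, kernel_size=3, stride=1), nn.ReLU(),
        )

    def forward(self, frames):                          # frames: (B, 3, H, W)
        fmap = self.conv(frames)                        # (B, D, h, w)
        B, D, h, w = fmap.shape
        return fmap.view(B, D, h * w).transpose(1, 2)   # (B, h*w, D) spatial feature locations

class AdditiveAttention(nn.Module):
    """Soft attention over spatial features, conditioned on the decoder hidden state."""
    def __init__(self, feat_dim=256, hidden_dim=512):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, hidden_dim)
        self.state_proj = nn.Linear(hidden_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, feats, state):                    # feats: (B, L, D), state: (B, H)
        e = self.score(torch.tanh(self.feat_proj(feats) + self.state_proj(state).unsqueeze(1)))
        alpha = torch.softmax(e, dim=1)                  # (B, L, 1) attention weights
        context = (alpha * feats).sum(dim=1)             # (B, D) attended feature vector
        return context, alpha.squeeze(-1)

class ExplanationDecoder(nn.Module):
    """LSTM decoder that emits one explanation token per step from the attended context."""
    def __init__(self, vocab_size, feat_dim=256, hidden_dim=512, embed_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.attn = AdditiveAttention(feat_dim, hidden_dim)
        self.lstm = nn.LSTMCell(embed_dim + feat_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, feats, tokens):                    # tokens: (B, T) teacher-forced word ids
        B, T = tokens.shape
        h = feats.new_zeros(B, self.lstm.hidden_size)
        c = feats.new_zeros(B, self.lstm.hidden_size)
        logits = []
        for t in range(T):
            context, _ = self.attn(feats, h)             # select features relevant to this word
            h, c = self.lstm(torch.cat([self.embed(tokens[:, t]), context], dim=1), (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)                # (B, T, vocab_size)
```

Trained end to end with a cross-entropy loss on the explanation tokens, a model of this shape produces, for each generated word, a set of attention weights over the extracted features, which is the mechanism the paper's attention variants build on.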

BibTeX

@conference{Wang-2019-120825,
author = {Xinzhi Wang and Shengcheng Yuan and Hui Zhang and Michael Lewis and Katia Sycara},
title = {Verbal Explanations for Deep Reinforcement Learning Neural Networks with Attention on Extracted Features},
booktitle = {Proceedings of the 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN '19)},
year = {2019},
month = {October},
}