Towards Interpretable Reinforcement Learning: Interactive Visualizations to Increase Insight

Shuby Deshpande
Master's Thesis, Tech. Report CMU-RI-TR-20-61, Robotics Institute, Carnegie Mellon University, December 2020

Abstract

Visualization tools for supervised learning (SL) allow users to interpret, introspect, and gain an intuition for the successes and failures of their models. While reinforcement learning (RL) practitioners ask many of the same questions when debugging agent policies, existing tools are not a good fit for the RL setting because they address challenges typically found in the SL regime. Whereas SL involves a static dataset, RL often entails collecting new data in challenging environments with partial observability, stochasticity, and non-stationary data distributions. These unique characteristics of the RL framework necessitate alternate visual interfaces to help us better understand agent policies trained using RL. In this work, we design and implement an interactive visualization tool for debugging and interpreting RL. Our system identifies and addresses important aspects missing from existing tools, and we provide an example workflow showing how the system could be used, along with ideas for future extensions. We describe one such extension, currently under development, that aims to increase insight into the learning dynamics of actor-critic algorithms by visualizing optimization landscapes.
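The abstract mentions visualizing optimization landscapes to study learning dynamics. One common way such visualizations are built (not necessarily the thesis's implementation) is to evaluate the loss on a 2D plane through the current parameters, spanned by two random normalized directions. The sketch below is illustrative: the function name and the toy quadratic stand-in for a critic's loss are assumptions, not code from the thesis.

```python
import numpy as np

def loss_landscape_slice(loss_fn, theta, steps=25, span=1.0, seed=0):
    """Evaluate loss_fn on a 2D plane through theta, spanned by two
    random unit-norm directions -- a standard recipe for visualizing
    optimization landscapes around a parameter vector."""
    rng = np.random.default_rng(seed)
    d1 = rng.standard_normal(theta.shape)
    d2 = rng.standard_normal(theta.shape)
    d1 /= np.linalg.norm(d1)
    d2 /= np.linalg.norm(d2)
    alphas = np.linspace(-span, span, steps)
    grid = np.empty((steps, steps))
    for i, a in enumerate(alphas):
        for j, b in enumerate(alphas):
            # Loss at the point theta + a*d1 + b*d2 on the slicing plane.
            grid[i, j] = loss_fn(theta + a * d1 + b * d2)
    return alphas, grid

# Toy stand-in for a critic's loss: a quadratic bowl centered at theta.
toy_loss = lambda w: float(np.sum(w ** 2))
theta = np.zeros(10)
alphas, grid = loss_landscape_slice(toy_loss, theta)
```

The resulting `grid` can be rendered as a contour or surface plot; in an actor-critic setting, `loss_fn` would instead re-evaluate the critic (or actor) objective on a fixed batch of transitions at each perturbed parameter vector.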

BibTeX

@mastersthesis{Deshpande-2020-125770,
author = {Shuby Deshpande},
title = {Towards Interpretable Reinforcement Learning: Interactive Visualizations to Increase Insight},
year = {2020},
month = {December},
school = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-20-61},
keywords = {Reinforcement Learning, Interpretability, Visualization},
}