
Automatically Evaluating and Generating Clear Robot Explanations

Master's Thesis, Tech. Report CMU-RI-TR-17-09, Robotics Institute, Carnegie Mellon University, May 2017

Abstract

As robots act in the environment, people observe their behaviors and form beliefs about their underlying intentions and preferences. Although people’s beliefs often affect their interactions with robots, today’s robot behaviors are rarely optimized for ease of human understanding. In this thesis, we contribute studies and algorithms that improve the transparency of robot behaviors for human observers by providing natural language-based and demonstration-based explanations. Our first set of studies aims to understand how people use natural language to clearly explain their goals of picking up specified blocks in a tabletop manipulation task. We find that the clearest explanations lead people through the visual search task by identifying highly salient visual features and spatial relations among the blocks on the table, using explicit perspective-taking words. Based on these findings, we contribute state-of-the-art graph-based algorithms that automatically generate clear natural language explanations similar to those found in our study, and we optimize these algorithms to demonstrate that they scale to realistic robot manipulation tasks. Our second set of studies aims to understand which features of robot demonstrations allow people to correctly interpret and generalize robot state preferences in grid-world navigation tasks. We identify critical points along a demonstrated trajectory that convey information about robot state preferences, namely inflection points and compromise points, and we contribute an approach for automatically generating trajectory demonstrations with specified numbers of critical points. We show that demonstrated trajectories with more inflection points and fewer compromise points allow observers to understand and generalize robot preferences more clearly than other combinations of critical points. We conclude the thesis with areas of future work that can further improve people’s understanding of robot behavior.

BibTeX

@mastersthesis{Li-2017-22816,
author = {Shen Li},
title = {Automatically Evaluating and Generating Clear Robot Explanations},
year = {2017},
month = {May},
school = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-17-09},
keywords = {Language},
}