
Nonverbal Communication in Socially Assistive Human-Robot Interaction

PhD Thesis, Yale University, May 2016

Abstract

Socially assistive robots provide assistance to human users through interactions that are inherently social. Socially assistive robots include robot tutors that instruct students through personalized one-on-one lessons [197], robot therapy assistants that help mediate social interactions between children with developmental disorders and adult therapists [216], and robot coaches that motivate children to make healthy eating choices [222].

To succeed in their role of social assistance, these robots must be capable of natural communication with people. Natural communication is multimodal, with both verbal channels (i.e., speech) and nonverbal channels (e.g., eye gaze, gestures, and other behaviors).

This dissertation focuses on enabling human-robot communication by building models for understanding human nonverbal behavior and generating robot nonverbal behavior in socially assistive domains. It investigates how to computationally model eye gaze and other nonverbal behaviors so that these behaviors can be used by socially assistive robots to improve human-robot collaboration.

Developing effective nonverbal communication for robots engages a number of disciplines including autonomous control, machine learning, computer vision, design, and cognitive psychology. This dissertation contributes across all of these disciplines, providing a greater understanding of the computational and human requirements for successful human-robot interactions.

To focus nonverbal communication models on the features that most strongly influence human-robot interactions, I first conducted a series of studies that draw out human responses to specific robot nonverbal behaviors. These carefully controlled laboratory-based studies investigate how robot eye gaze compares to human eye gaze in eliciting reflexive attention shifts from human viewers; how different features of robot gaze behavior promote the perception of a robot’s attention toward a viewer; whether people use robot eye gaze to support verbal object references and how they resolve conflicts in this multimodal communication; and what role eye gaze and gesture play in guiding behavior during human-robot collaboration.

Based on this understanding of nonverbal communication between people and robots, I develop a set of models for understanding and generating nonverbal behavior in human-robot interactions. The first model takes a data-driven approach grounded in the domain of tutoring. It is trained on examples of human-human behavior in which a teacher instructs a student about a map-based board game. This model can predict the context of a communication from a new observation of nonverbal behavior, and it can suggest appropriate nonverbal behaviors to support a desired context.
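
To make the two directions of use concrete, the following is a minimal Python sketch of what such a data-driven model could look like. The feature set, context labels, classifier choice, and training values are illustrative assumptions for this page, not the representation or learning method used in the dissertation.

# Illustrative sketch only: the thesis does not specify this feature set,
# label set, or learning algorithm. All names and values are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features extracted from an annotated human-human tutoring
# corpus: [gaze_at_partner_sec, gaze_at_map_sec, num_deictic_gestures,
#          speech_overlap_sec]
X_train = np.array([
    [0.2, 2.5, 2, 0.1],   # instructor referencing the board
    [1.8, 0.4, 0, 0.6],   # instructor checking student understanding
    [0.3, 2.1, 3, 0.0],
    [1.5, 0.6, 0, 0.4],
])
# Hypothetical communicative-context labels for each observation.
y_train = ["reference", "monitoring", "reference", "monitoring"]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# Understanding: predict the context of a newly observed behavior.
new_observation = np.array([[0.4, 1.9, 1, 0.1]])
print(model.predict(new_observation))

# Generation: suggest behavior features for a desired context by picking
# the training example the model scores most confidently for that context.
desired = "monitoring"
probs = model.predict_proba(X_train)
idx = int(np.argmax(probs[:, list(model.classes_).index(desired)]))
print("Suggested behavior features:", X_train[idx])

This toy snippet only illustrates the two uses described above, recognizing the context of observed behavior and suggesting behavior for a target context; the dissertation trains on annotated human-human tutoring interactions rather than hand-written examples.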

The second model takes a scene-based approach to generate nonverbal behavior for a socially assistive robot. This model is context independent and does not rely on a priori collection and annotation of human examples, as the first model does. Instead, it calculates how a user will perceive a visual scene from their own perspective based on cognitive psychology principles, and it then selects the best robot nonverbal behavior to direct the user’s attention based on this predicted perception. The model can be flexibly applied to a range of scenes and a variety of robots with different physical capabilities. I show that this second model performs well in both a targeted evaluation and in a naturalistic human-robot collaborative interaction.
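
As a rough, hypothetical sketch of this kind of scene-based selection logic: estimate how salient each object is from the user’s viewpoint, predict how much each candidate robot behavior would shift the user’s attention toward the target, and pick the least effortful behavior that suffices. The saliency values, behavior boosts, and threshold below are invented placeholders rather than the perceptual model from the dissertation.

# Illustrative sketch only: numbers and the selection rule are placeholders.
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    saliency: float  # predicted salience from the user's viewpoint (0-1)

# Hypothetical multiplicative boost each robot behavior adds to the
# attention directed at the referenced object.
BEHAVIOR_BOOST = {"none": 1.0, "gaze": 1.5, "point": 2.0, "gaze+point": 2.8}

def predicted_attention(scene, target, behavior):
    """Share of the user's attention predicted to land on the target."""
    weights = {obj.name: obj.saliency for obj in scene}
    weights[target] *= BEHAVIOR_BOOST[behavior]
    return weights[target] / sum(weights.values())

def select_behavior(scene, target, available, threshold=0.35):
    """Pick the least effortful behavior that directs enough attention."""
    for behavior in available:  # assumed ordered from least to most effortful
        if predicted_attention(scene, target, behavior) >= threshold:
            return behavior
    return available[-1]

scene = [SceneObject("red block", 0.9), SceneObject("blue block", 0.3),
         SceneObject("green bowl", 0.4)]
print(select_behavior(scene, "blue block",
                      ["none", "gaze", "point", "gaze+point"]))

Restricting the list of available behaviors to those a particular robot can physically produce is one way the same selection logic could carry over to platforms with different capabilities, which is the kind of flexibility the paragraph above describes.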

BibTeX

@phdthesis{Admoni-2016-113262,
author = {Henny Admoni},
title = {Nonverbal Communication in Socially Assistive Human-Robot Interaction},
school = {Yale University},
month = {May},
year = {2016},
}