Eye-Hand Behavior in Human-Robot Shared Manipulation

Reuben M. Aronson, Thiago Santini, Thomas C. Kübler, Enkelejda Kasneci, Siddhartha Srinivasa, and Henny Admoni
Conference Paper, Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI '18), pp. 4-13, February 2018

Abstract

Shared autonomy systems enhance people's abilities to perform activities of daily living using robotic manipulators. Recent systems succeed by first identifying their operators' intentions, typically by analyzing the user's joystick input. To enhance this recognition, it is useful to characterize how people behave while performing such tasks. Furthermore, eye gaze is a rich source of information for understanding operator intention. The goal of this paper is to provide novel insights into the dynamics of control behavior and eye gaze in human-robot shared manipulation tasks. To achieve this goal, we conduct a data collection study that uses an eye tracker to record eye gaze during a human-robot shared manipulation activity, both with and without shared autonomy assistance. We process the gaze signals from the study to extract gaze features such as saccades, fixations, smooth pursuits, and scan paths. We analyze these features to identify novel patterns of gaze behavior and highlight where these patterns are similar to, and where they differ from, previous findings about eye gaze in human-only manipulation tasks. The work described in this paper lays a foundation for a model of natural human eye gaze in human-robot shared manipulation.
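The abstract does not detail the gaze-processing pipeline, but the kind of feature extraction it describes is commonly implemented with an event classifier. Below is a minimal, illustrative Python sketch of a velocity-threshold (I-VT) fixation/saccade classifier; the function names, thresholds, and units are assumptions for illustration, not the authors' actual method, and a plain I-VT pass does not by itself separate smooth pursuits from fixations.

import numpy as np

def classify_ivt(t, x, y, velocity_threshold=30.0):
    # t: sample timestamps in seconds; x, y: gaze position in degrees of
    # visual angle. velocity_threshold is in deg/s; 30 deg/s is a common
    # but illustrative choice. Assumes at least two samples.
    dt = np.diff(t)
    speed = np.hypot(np.diff(x), np.diff(y)) / dt  # angular speed per sample
    speed = np.concatenate(([speed[0]], speed))    # pad back to len(t)
    return np.where(speed < velocity_threshold, "fixation", "saccade")

def group_events(labels, t, min_fixation_dur=0.060):
    # Collapse runs of identical labels into (label, t_start, t_end) events,
    # discarding fixations shorter than min_fixation_dur seconds.
    events, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            keep = (labels[start] != "fixation"
                    or t[i - 1] - t[start] >= min_fixation_dur)
            if keep:
                events.append((labels[start], t[start], t[i - 1]))
            start = i
    return events

Applied in sequence, classify_ivt followed by group_events turns a raw (t, x, y) gaze trace into a list of labeled events, from which quantities like fixation durations and scan paths can then be derived.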

BibTeX

@conference{Aronson-2018-105029,
author = {Reuben M. Aronson and Thiago Santini and Thomas C. Kübler and Enkelejda Kasneci and Siddhartha Srinivasa and Henny Admoni},
title = {Eye-Hand Behavior in Human-Robot Shared Manipulation},
booktitle = {Proceedings of ACM/IEEE International Conference on Human-Robot Interaction (HRI '18)},
year = {2018},
month = {February},
pages = {4--13},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {human-robot interaction, eye gaze, eye tracking, shared autonomy, nonverbal communication},
}