Semantic Gaze Labeling for Human-Robot Shared Manipulation

Reuben M. Aronson and Henny Admoni
Conference Paper, Proceedings of the 11th ACM Symposium on Eye Tracking Research and Applications (ETRA '19), June 2019

Abstract

Human-robot collaboration systems benefit from recognizing people’s intentions. This capability is especially useful for collaborative manipulation applications, in which users operate robot arms to manipulate objects. For collaborative manipulation, systems can determine users’ intentions by tracking eye gaze and identifying gaze fixations on particular objects in the scene (i.e., semantic gaze labeling). Translating 2D fixation locations (from eye trackers) into 3D fixation locations (in the real world) is a technical challenge. One approach is to assign each fixation to the object closest to it. However, calibration drift, head motion, and the extra dimension required for real-world interactions make this position-matching approach inaccurate. In this work, we introduce velocity features that compare the relative motion between subsequent gaze fixations and a finite set of known points, and we use these features to assign each fixation to one of those known points. We validate our approach on synthetic data, demonstrating that classification with velocity features is more robust than position matching. In addition, we show that a classifier using velocity features improves semantic labeling on a real-world dataset of human-robot assistive manipulation interactions.
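
To make the contrast concrete, below is a minimal sketch (not the authors' implementation) of the two labeling strategies described in the abstract, assuming 2D fixation points and known object locations already projected into a common gaze frame. The function names, the toy object positions, and the drift value are hypothetical and chosen only for illustration.

import numpy as np

def nearest_object_label(fixation, object_points):
    # Position matching: label the fixation with the closest known point.
    dists = np.linalg.norm(object_points - fixation, axis=1)
    return int(np.argmin(dists))

def velocity_feature_label(prev_fix, curr_fix, prev_objs, curr_objs):
    # Velocity features: compare the gaze displacement between subsequent
    # fixations with each known point's displacement over the same interval;
    # a constant offset from calibration drift cancels out of the displacement.
    gaze_motion = curr_fix - prev_fix
    obj_motions = curr_objs - prev_objs
    errors = np.linalg.norm(obj_motions - gaze_motion, axis=1)
    return int(np.argmin(errors))

# Toy example: the user looks at object 1 while the gaze estimate carries a
# constant drift, so position matching picks the wrong object but the
# velocity-feature comparison still recovers object 1.
prev_objs = np.array([[0.0, 0.0], [5.0, 0.0]])
curr_objs = np.array([[0.0, 0.0], [5.0, 2.0]])   # object 1 moves upward
drift = np.array([-3.0, 0.0])
prev_fix = prev_objs[1] + drift
curr_fix = curr_objs[1] + drift

print(nearest_object_label(curr_fix, curr_objs))                         # 0 (wrong)
print(velocity_feature_label(prev_fix, curr_fix, prev_objs, curr_objs))  # 1 (correct)

In this toy case the constant drift shifts the fixation closer to the wrong object, so nearest-point matching fails, while comparing displacements between subsequent samples is unaffected by the offset.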

BibTeX

@conference{Aronson-2019-112659,
author = {Reuben M. Aronson and Henny Admoni},
title = {Semantic Gaze Labeling for Human-Robot Shared Manipulation},
booktitle = {Proceedings of the 11th ACM Symposium on Eye Tracking Research and Applications (ETRA '19)},
year = {2019},
month = {June},
publisher = {ACM},
keywords = {eye tracking, intention recognition, semantic gaze labeling, human-robot interaction, assistive robotics},
}