Inferring Goals with Gaze during Teleoperated Manipulation

Reuben M. Aronson, Nadia AlMutlak, and Henny Admoni
Conference Paper, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 2021

Abstract

Assistive robot manipulators help people with upper motor impairments perform everyday tasks on their own. However, teleoperating a robot to perform complex tasks is difficult. Shared control algorithms make this easier: they predict the user's goal, autonomously generate a plan to accomplish that goal, and fuse the plan with the user's input. To predict the user's goal, these algorithms typically rely directly on the user's control input (e.g., joystick commands). We use another sensing modality: the user's natural eye gaze behavior, which is highly task-relevant and informative early in the task. We develop an algorithm based on hidden Markov models to infer goals from the natural eye gaze behavior that users exhibit while teleoperating a robot. We show that gaze-based predictions outperform goal prediction based on the control input alone, and that our sequence model improves prediction quality over gaze-based aggregate models.
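
The abstract does not give the model details, but the basic idea can be sketched: treat the candidate goals as the hidden states of an HMM, treat the user's gaze fixations as observations, and filter with the forward algorithm. The sketch below is a minimal illustration under assumed structure; all names and parameter values (N_GOALS, P_STAY, P_LOOK_GOAL) are hypothetical, not taken from the paper.

import numpy as np

# Hypothetical parameters -- illustrative values, not from the paper.
N_GOALS = 3          # candidate goal objects in the scene
P_STAY = 0.95        # probability the latent goal persists between frames
P_LOOK_GOAL = 0.6    # probability of fixating the goal object
# Remaining mass spread over the N_GOALS-1 other objects plus a
# "background" label, i.e. N_GOALS non-goal fixation labels in total.
P_LOOK_OTHER = (1.0 - P_LOOK_GOAL) / N_GOALS

# Transition matrix: the goal mostly persists, occasionally switches.
A = np.full((N_GOALS, N_GOALS), (1.0 - P_STAY) / (N_GOALS - 1))
np.fill_diagonal(A, P_STAY)

# Emission matrix: rows = latent goal, columns = fixation label
# (one label per object, plus a final "background" label).
B = np.full((N_GOALS, N_GOALS + 1), P_LOOK_OTHER)
for g in range(N_GOALS):
    B[g, g] = P_LOOK_GOAL

def infer_goal(fixations):
    """Forward-algorithm filtering: posterior over goals given a
    sequence of discrete fixation labels (0..N_GOALS-1 = objects,
    N_GOALS = background)."""
    belief = np.full(N_GOALS, 1.0 / N_GOALS)  # uniform prior
    for z in fixations:
        belief = (A.T @ belief) * B[:, z]     # predict, then update
        belief /= belief.sum()                # normalize
    return belief

# Example: the user mostly fixates object 1, with a background
# glance and a stray glance at object 0 mixed in.
print(infer_goal([1, 1, 3, 1, 1, 0, 1]))

This is one way a sequence model can beat aggregate gaze statistics: the belief accumulates evidence over time, so a few stray glances barely perturb it, while a consistent run of fixations on one object drives that goal's posterior toward 1.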

BibTeX

@conference{Aronson-2021-128469,
author = {Reuben M. Aronson and Nadia AlMutlak and Henny Admoni},
title = {Inferring Goals with Gaze during Teleoperated Manipulation},
booktitle = {Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
year = {2021},
month = {September},
keywords = {Human-Robot Interaction, Physically Assistive Devices, Telerobotics and Teleoperation, Eye Gaze},
}