Gaze Complements Control Input for Goal Prediction During Assisted Teleoperation

Reuben M. Aronson and Henny Admoni
Robotics Institute, Carnegie Mellon University
Conference Paper, Proceedings of Robotics: Science and Systems (RSS '22), June, 2022

Abstract

Shared control systems can make complex robot teleoperation tasks easier for users. These systems predict the user’s goal, determine the motion required for the robot to reach that goal, and combine that motion with the user’s input. Goal prediction is generally based on the user’s control input (e.g., the joystick signal). In this paper, we show that this prediction method is especially effective when users follow standard noisily optimal behavior models. In tasks with input constraints like modal control, however, this effectiveness no longer holds, so additional sources for goal prediction can improve assistance. We implement a novel shared control system that combines natural eye gaze with joystick input to predict people’s goals online, and we evaluate our system in a real-world, COVID-safe user study. We find that modal control reduces the efficiency of assistance according to our model, and when gaze provides a prediction earlier in the task, the system’s performance improves. However, gaze on its own is unreliable and assistance using only gaze performs poorly. We conclude that control input and natural gaze serve different and complementary roles in goal prediction, and using them together leads to improved assistance.
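The abstract describes fusing two evidence sources for goal prediction: the user's control input, scored under a noisily optimal (Boltzmann-rational) behavior model, and natural eye gaze. The sketch below illustrates one plausible way such a fusion could work as a Bayesian update over candidate goals; the goal positions, the rationality parameter `beta`, and the gaze concentration `kappa` are illustrative assumptions, not the paper's actual model or parameters.

```python
import numpy as np

def joystick_likelihood(u, x, goals, beta=5.0):
    """Boltzmann-rational likelihood: control inputs pointing toward a goal
    are exponentially more likely under that goal. `u` is the joystick
    vector, `x` the robot position, `goals` an (n, d) array."""
    dirs = goals - x
    dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    u_norm = u / (np.linalg.norm(u) + 1e-9)
    alignment = dirs @ u_norm  # cosine similarity between input and each goal direction
    return np.exp(beta * alignment)

def gaze_likelihood(gaze_point, goals, kappa=2.0):
    """Gaussian-style likelihood: goals near the current gaze point are
    more likely. `kappa` controls how sharply gaze evidence concentrates."""
    d = np.linalg.norm(goals - gaze_point, axis=1)
    return np.exp(-kappa * d ** 2)

def fuse(prior, u, x, gaze_point, goals):
    """One Bayesian update combining joystick and gaze evidence."""
    post = prior * joystick_likelihood(u, x, goals) * gaze_likelihood(gaze_point, goals)
    return post / post.sum()

# Two candidate goals; joystick pushes right while gaze rests near the
# right-hand goal, so both sources agree and the posterior concentrates there.
goals = np.array([[1.0, 0.0], [0.0, 1.0]])
x = np.zeros(2)
prior = np.full(len(goals), 0.5)
post = fuse(prior, np.array([1.0, 0.0]), x, np.array([0.9, 0.1]), goals)
```

Because the two likelihoods multiply, either source alone can shift the posterior early in the task (e.g., gaze fixating a goal before any joystick motion), which mirrors the paper's finding that gaze and control input play complementary roles.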

BibTeX

@conference{Aronson-2022-131880,
author = {Reuben M. Aronson and Henny Admoni},
title = {Gaze Complements Control Input for Goal Prediction During Assisted Teleoperation},
booktitle = {Proceedings of Robotics: Science and Systems (RSS '22)},
year = {2022},
month = {June},
}