Models of Trust in Human Control of Swarms with Varied Levels of Autonomy

Changjoo Nam, Phillip Walker, Huao Li, Michael Lewis, and Katia Sycara
Journal Article, IEEE Transactions on Human-Machine Systems, Vol. 50, No. 3, pp. 194-204, June 2020

Abstract

In this paper, we study human trust and its computational models in supervisory control of swarm robots with varied levels of autonomy (LOA) in a target foraging task. We implement three LOAs: manual, mixed-initiative (MI), and fully autonomous. In the MI LOA, the swarm is controlled collaboratively by a human operator and an autonomous search algorithm, whereas in the manual and autonomous LOAs it is directed entirely by the human or by the search algorithm, respectively. From user studies, we find that because the task performance of the swarm is not clearly perceivable, humans tend to base their trust decisions on physical characteristics of the swarm rather than on its performance. Based on this analysis, we formulate trust as a Markov decision process (MDP) whose state space includes the factors affecting trust, and we develop variations of the trust model for the different LOAs. We employ an inverse reinforcement learning (IRL) algorithm to learn operator behaviors from demonstrations, and the learned behaviors are used to predict human trust. Compared to an existing model, our models reduce prediction error by up to 39.6%, 36.5%, and 28.8% in the manual, MI, and fully autonomous LOAs, respectively.
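The abstract does not specify implementation details, so the following is only a minimal illustrative sketch, not the authors' code: it assumes a discretized MDP whose states encode perceivable swarm features (rather than raw task performance) and uses maximum-entropy IRL, a common learning-from-demonstration technique, to recover a reward function from operator demonstrations. All names, state/action sizes, and feature definitions below are hypothetical.

```python
import numpy as np

# Hypothetical discretization: trust-relevant states combine perceivable swarm
# features (e.g., swarm spread and coverage progress). Sizes are illustrative.
N_STATES = 12    # discretized (swarm spread, coverage) bins
N_ACTIONS = 3    # e.g., intervene, wait, switch control mode
GAMMA = 0.9

def soft_value_iteration(P, reward, n_iters=100):
    """Soft (max-ent) value iteration; returns a stochastic policy pi(a|s).

    P: transition tensor of shape (A, S, S); reward: vector of length S.
    """
    V = np.zeros(N_STATES)
    for _ in range(n_iters):
        Q = reward[None, :] + GAMMA * (P @ V)        # Q[a, s]
        Q_max = Q.max(axis=0)
        V = Q_max + np.log(np.exp(Q - Q_max).sum(axis=0))  # soft max over a
    policy = np.exp(Q - V[None, :])
    return policy / policy.sum(axis=0, keepdims=True)

def state_visitation(P, policy, start_dist, horizon=50):
    """Expected state visitation frequencies under the given policy."""
    d = start_dist.copy()
    total = np.zeros(N_STATES)
    for _ in range(horizon):
        total += d
        # propagate the state distribution one step under pi
        d = np.einsum('as,ast->t', policy * d[None, :], P)
    return total

def maxent_irl(P, features, demos, lr=0.05, epochs=200):
    """Recover a state reward that explains operator demonstrations.

    features: (S, K) state-feature matrix; demos: list of state-index arrays.
    """
    theta = np.zeros(features.shape[1])
    start_dist = np.ones(N_STATES) / N_STATES
    # empirical feature expectations from the demonstrations
    emp = np.mean([features[np.asarray(traj)].sum(axis=0) for traj in demos],
                  axis=0)
    for _ in range(epochs):
        reward = features @ theta
        policy = soft_value_iteration(P, reward)
        mu = state_visitation(P, policy, start_dist)
        theta += lr * (emp - features.T @ mu)        # match feature counts
    return features @ theta
```

Under these assumptions, the recovered reward (and the soft-optimal policy it induces) stands in for the learned operator behavior and could be queried to predict trust-driven interventions in each LOA.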

BibTeX

@article{Nam-2020-120811,
author = {Changjoo Nam and Phillip Walker and Huao Li and Michael Lewis and Katia Sycara},
title = {Models of Trust in Human Control of Swarms with Varied Levels of Autonomy},
journal = {IEEE Transactions on Human-Machine Systems},
year = {2020},
month = {June},
volume = {50},
number = {3},
pages = {194--204},
}