Efficient Model Learning from Joint-Action Demonstrations for Human-Robot Collaborative Tasks

Stefanos Nikolaidis, Ramya Ramakrishnan, Keren Gu, and Julie Shah
Conference Paper, Proceedings of the 10th Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI '15), pp. 189-196, March 2015

Abstract

We present a framework for automatically learning human user models from joint-action demonstrations, enabling a robot to compute a robust policy for a collaborative task with a human. First, the demonstrated action sequences are clustered into different human types using an unsupervised learning algorithm. A reward function is then learned for each type using an inverse reinforcement learning algorithm. The learned model is incorporated into a mixed-observability Markov decision process (MOMDP) formulation, wherein the human type is a partially observable variable. With this framework, we can infer online the human type of a new user who was not included in the training set, and can compute a policy for the robot that is aligned with the preferences of this user. In a human subject experiment (n=30), participants agreed more strongly that the robot anticipated their actions when working with a robot incorporating the proposed framework (p
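The pipeline described above can be illustrated with a minimal sketch. This is not the authors' implementation: the action vocabulary, the frequency-based features, plain k-means as the unsupervised clustering step, and the use of each cluster centroid as a type's action distribution are all illustrative assumptions. It shows (1) clustering demonstrated action sequences into human types and (2) a Bayesian belief update over the type of a new user, which plays the role of the MOMDP's partially observable variable.

```python
# Hypothetical sketch of the framework's learning and online-inference steps.
ACTIONS = ["screw", "hold", "wait"]  # illustrative joint-action vocabulary


def features(seq):
    """Empirical action frequencies of one demonstrated sequence."""
    return [seq.count(a) / len(seq) for a in ACTIONS]


def _dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))


def _mean(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]


def cluster_types(demos, k=2, iters=20):
    """Plain k-means over sequence features; returns one centroid per type."""
    pts = [features(d) for d in demos]
    # Deterministic farthest-point initialization.
    centers = [pts[0]]
    while len(centers) < k:
        centers.append(max(pts, key=lambda p: min(_dist2(p, c) for c in centers)))
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in pts:
            groups[min(range(k), key=lambda i: _dist2(p, centers[i]))].append(p)
        centers = [_mean(g) if g else centers[i] for i, g in enumerate(groups)]
    return centers


def update_belief(belief, action, centers, eps=1e-3):
    """One Bayes filter step: P(type | action) is proportional to
    P(action | type) * P(type), with the centroid (ab)used as the
    type's action distribution and eps for smoothing."""
    idx = ACTIONS.index(action)
    post = [b * (c[idx] + eps) for b, c in zip(belief, centers)]
    z = sum(post)
    return [p / z for p in post]


# Demonstrations from two styles of user (favoring "screw" vs. "hold").
demos = [["screw"] * 8 + ["wait"] * 2 for _ in range(5)] + \
        [["hold"] * 8 + ["wait"] * 2 for _ in range(5)]
centers = cluster_types(demos, k=2)

# Infer online the type of a new user who mostly chooses "hold".
belief = [0.5, 0.5]
for a in ["hold", "hold", "wait", "hold"]:
    belief = update_belief(belief, a, centers)
hold_type = max(range(2), key=lambda i: centers[i][ACTIONS.index("hold")])
```

In the full framework the inferred type would select among the learned reward functions, and the MOMDP policy would act on the belief rather than a point estimate; the sketch stops at the belief update.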

Notes
Best Enabling Technology Award

BibTeX

@conference{Nikolaidis-2015-5931,
author = {Stefanos Nikolaidis and Ramya Ramakrishnan and Keren Gu and Julie Shah},
title = {Efficient Model Learning from Joint-Action Demonstrations for Human-Robot Collaborative Tasks},
booktitle = {Proceedings of 10th Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI '15)},
year = {2015},
month = {March},
pages = {189--196},
}