
Teaching Robots to Predict Human Motion

Liang-Yan Gui, Kevin Zhang, Yuxiong Wang, Xiaodan Liang, Jose M. F. Moura, and Manuela Veloso
Conference Paper, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 562–567, October 2018

Abstract

Teaching a robot to predict and mimic how a human moves or acts in the near future by observing a series of historical human movements is a crucial first step in human-robot interaction and collaboration. In this paper, we instrument a robot with such a prediction ability by leveraging recent deep learning and computer vision techniques. First, our system takes images from the robot camera as input to produce the corresponding human skeleton based on real-time human pose estimation obtained with the OpenPose library. Then, conditioning on this historical sequence, the robot forecasts plausible motion through a motion predictor, generating a corresponding demonstration.
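The pipeline described above can be sketched in a few lines of code. The sketch below is illustrative only, under assumed interfaces: estimate_pose() stands in for an OpenPose call returning a flattened skeleton, MotionPredictor is a generic GRU encoder-decoder standing in for the paper's motion predictor, and robot.replay() is a hypothetical method for playing back a joint trajectory.

```python
# Minimal pipeline sketch (hypothetical interfaces, not the authors' code).
from collections import deque
import numpy as np
import torch
import torch.nn as nn

HISTORY_LEN = 50   # observed frames
FUTURE_LEN = 25    # frames to forecast


class MotionPredictor(nn.Module):
    """GRU encoder-decoder mapping an observed skeleton sequence to a
    predicted future sequence (stand-in for the paper's motion predictor)."""

    def __init__(self, joint_dim=54, hidden=256):
        super().__init__()
        self.encoder = nn.GRU(joint_dim, hidden, batch_first=True)
        self.decoder = nn.GRUCell(joint_dim, hidden)
        self.out = nn.Linear(hidden, joint_dim)

    def forward(self, history, future_len=FUTURE_LEN):
        _, h = self.encoder(history)        # summarize the observed motion
        h = h.squeeze(0)
        frame = history[:, -1]              # start from the last observed pose
        preds = []
        for _ in range(future_len):         # autoregressive, residual rollout
            h = self.decoder(frame, h)
            frame = frame + self.out(h)
            preds.append(frame)
        return torch.stack(preds, dim=1)


def estimate_pose(frame) -> np.ndarray:
    """Placeholder for OpenPose: returns a flattened skeleton for one image."""
    raise NotImplementedError


def run(camera, predictor, robot):
    history = deque(maxlen=HISTORY_LEN)
    for frame in camera:                    # stream of robot-camera images
        history.append(estimate_pose(frame))
        if len(history) == HISTORY_LEN:
            obs = torch.tensor(np.stack(history), dtype=torch.float32)[None]
            future = predictor(obs)         # forecast plausible future motion
            robot.replay(future[0].detach().numpy())  # mimic the prediction
```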

Lacking validation of high-level fidelity, existing forecasting algorithms suffer from error accumulation and inaccurate prediction. Inspired by generative adversarial networks (GANs), we introduce a global discriminator that examines whether the predicted sequence is smooth and realistic. The resulting motion GAN model achieves prediction performance superior to state-of-the-art approaches when evaluated on the standard H3.6M dataset. Building on this motion GAN model, the robot demonstrates its ability to replay the predicted motion in a human-like manner when interacting with a person.
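The sketch below illustrates the idea of a global discriminator in the spirit of this motion GAN: it scores the whole observed-plus-predicted sequence rather than isolated future frames, so the generator is pushed toward globally smooth, realistic motion. The architecture and loss are assumed for illustration and are not the paper's exact model.

```python
# Hedged sketch of a sequence-level ("global") discriminator and GAN losses.
import torch
import torch.nn as nn


class GlobalDiscriminator(nn.Module):
    """Scores how realistic a full motion sequence looks."""

    def __init__(self, joint_dim=54, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(joint_dim, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)

    def forward(self, seq):                       # seq: (B, T, joint_dim)
        _, h = self.rnn(seq)
        return torch.sigmoid(self.score(h.squeeze(0)))  # realism probability


def adversarial_losses(disc, observed, real_future, fake_future):
    """Standard GAN losses on full sequences; the observed frames are
    prepended so the discriminator judges global continuity and smoothness."""
    bce = nn.BCELoss()
    real_seq = torch.cat([observed, real_future], dim=1)
    fake_seq = torch.cat([observed, fake_future], dim=1)
    d_real = disc(real_seq)
    d_fake = disc(fake_seq.detach())
    d_loss = (bce(d_real, torch.ones_like(d_real))
              + bce(d_fake, torch.zeros_like(d_fake)))
    g_adv = bce(disc(fake_seq), torch.ones_like(d_real))  # fool the discriminator
    return d_loss, g_adv
```

In practice the adversarial term g_adv would be combined with a frame-wise reconstruction loss on the predicted poses when updating the generator.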

BibTeX

@conference{gui-2018-110272,
author = {Liang-Yan Gui and Kevin Zhang and Yuxiong Wang and Xiaodan Liang and Jose M. F. Moura and Manuela Veloso},
title = {Teaching Robots to Predict Human Motion},
booktitle = {Proceedings of (IROS) IEEE/RSJ International Conference on Intelligent Robots and Systems},
year = {2018},
month = {October},
pages = {562--567},
}