
Learning to Collaborate from Simulation for Robot-assisted Dressing

Alexander Clegg, Zackory Erickson, Patrick Grady, Greg Turk, Charles C. Kemp, and C. Karen Liu
Journal Article, IEEE Robotics and Automation Letters, Vol. 5, No. 2, pp. 2746-2753, April 2020

Abstract

We investigated the application of haptic feedback control and deep reinforcement learning (DRL) to robot-assisted dressing. Our method uses DRL to simultaneously train human and robot control policies as separate neural networks using physics simulations. We also modeled variations in human impairments relevant to dressing, including unilateral muscle weakness, involuntary arm motion, and limited range of motion. The resulting control policies successfully collaborated in a variety of simulated dressing tasks involving a hospital gown and a T-shirt, and policies trained in simulation enabled a real PR2 robot to dress the arm of a humanoid robot with a hospital gown. We found that training policies for specific impairments dramatically improved performance; that controller execution speed could be scaled after training to reduce the robot's speed without steep reductions in performance; that curriculum learning could be used to lower applied forces; and that multi-modal sensing, including a simulated capacitive sensor, improved performance.
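To illustrate the core idea of training two collaborating policies at once in a shared simulation, here is a minimal sketch. It is not the paper's implementation: the 1-D "dressing" task, the linear policies, and the random-search update are all illustrative assumptions standing in for the physics simulation, neural networks, and DRL algorithm used in the paper.

```python
# Minimal sketch (illustrative only): two independent policies, one for the
# "robot" and one for the "human", trained simultaneously in a shared toy
# environment with a cooperative reward. The 1-D task and random-search
# updates are assumptions, not the paper's method.
import random

random.seed(0)

def step(robot_pos, human_pos, robot_act, human_act):
    """Advance both agents on a line; reward closeness (cooperation)."""
    robot_pos += robot_act
    human_pos += human_act
    reward = -abs(robot_pos - human_pos)  # shared cooperative reward
    return robot_pos, human_pos, reward

def rollout(robot_w, human_w, steps=20):
    """Total episode reward for a pair of linear policies."""
    robot_pos, human_pos, total = 0.0, 5.0, 0.0
    for _ in range(steps):
        gap = human_pos - robot_pos
        # each policy observes the gap and outputs a bounded action
        robot_act = max(-1.0, min(1.0, robot_w * gap))
        human_act = max(-1.0, min(1.0, -human_w * gap))
        robot_pos, human_pos, r = step(robot_pos, human_pos,
                                       robot_act, human_act)
        total += r
    return total

# Simultaneous training: jointly perturb both policies, keep improvements.
robot_w, human_w = 0.0, 0.0
best = rollout(robot_w, human_w)
for _ in range(200):
    rw = robot_w + random.uniform(-0.1, 0.1)
    hw = human_w + random.uniform(-0.1, 0.1)
    score = rollout(rw, hw)
    if score > best:
        robot_w, human_w, best = rw, hw, score

print(best)
```

Because the reward is shared, improvements to either policy help both; this is the structural point of co-training the human and robot controllers rather than fixing one and optimizing the other.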

BibTeX

@article{Clegg-2020-127563,
author = {Alexander Clegg and Zackory Erickson and Patrick Grady and Greg Turk and Charles C. Kemp and C. Karen Liu},
title = {Learning to Collaborate from Simulation for Robot-assisted Dressing},
journal = {IEEE Robotics and Automation Letters},
year = {2020},
month = {April},
volume = {5},
number = {2},
pages = {2746--2753},
}