Learning in Human-Robot Teams
Odest Chadwicke Jenkins, Assistant Professor, Computer Science, Brown University
Mauldin Auditorium (NSH 1305)
Time: 3:30 to 4:30 pm
A principal goal of robotics is to realize embodied systems that are effective collaborators in human endeavors in the physical world. Human-robot collaboration can take a variety of forms, including autonomous robotic assistants, mixed-initiative robot explorers, and augmentations of the human body. For these collaborations to be effective, human users must be able to translate their intended behavior into actual robot control policies. At run time, robots should be able to "manipulate" an environment and engage in two-way communication in a manner suited to their human users. Further, the tools for programming, communicating with, and manipulating through robots should be accessible across the diverse range of technical abilities present in society. Although robots face greater degrees of uncertainty, it is crucial to provide robotic "authoring" tools analogous to those for societally ubiquitous applications that manipulate purely digital information, such as web authoring, interactive virtual worlds, and productivity software.
Toward the goal of effective human-robot collaboration, our research has pursued learning and data-driven approaches to robot programming, communication, and manipulation. Learning from demonstration (LfD) has emerged as a central theme in our efforts toward natural instruction of autonomous robots by human users. In robot LfD, the desired robot control policy is implicit in human demonstrations rather than explicitly coded in a computer program.
In this talk, I will describe our LfD-based work in policy learning using Gaussian Process Regression and in humanoid imitation learning through spatio-temporal dimension reduction. This work is supported by our efforts in markerless and inertial-based human kinematic tracking, notably our indoor-outdoor person-following system developed in collaboration with iRobot Research. I will argue that collaboration in human-robot teams can be modeled by Markov Random Fields (MRFs), allowing for unification of existing multi-robot algorithms and application of belief propagation. Time permitting, I will also discuss our work on learning tactile and force signatures to distinguish successful from unsuccessful grasps on the NASA Robonaut.
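To make the regression-based view of LfD concrete, here is a minimal, illustrative sketch (not the speaker's actual code) of treating a policy as Gaussian process regression over demonstrations: the robot records hypothetical 1-D (state, action) pairs from a human teacher, and the learned policy returns a mean action plus an uncertainty estimate at any novel state. The demonstration data and kernel parameters below are invented for illustration.

```python
import numpy as np

def gp_policy(train_x, train_y, query_x, length_scale=0.3, noise=1e-3):
    """GP regression: predict actions (mean, std) at query states
    from 1-D demonstrated (state, action) pairs."""
    def rbf(a, b):
        # Squared-exponential kernel between two 1-D state arrays.
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / length_scale) ** 2)

    # Kernel matrix over training states, regularized by observation noise.
    K = rbf(train_x, train_x) + noise * np.eye(len(train_x))
    Ks = rbf(query_x, train_x)

    # Posterior mean and variance of the predicted action.
    alpha = np.linalg.solve(K, train_y)
    mean = Ks @ alpha
    var = 1.0 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return mean, np.sqrt(np.maximum(var, 0.0))

# Hypothetical demonstrations: the teacher's action is sin(3x) at state x.
rng = np.random.default_rng(0)
states = np.sort(rng.uniform(-1.0, 1.0, 40))
actions = np.sin(3.0 * states)

# Query the learned policy at a novel state; the returned std can gate
# autonomy (e.g., request a new demonstration where uncertainty is high).
mean, std = gp_policy(states, actions, np.array([0.2]))
```

The predictive standard deviation is what distinguishes this from plain curve fitting: it is low near demonstrated states and grows away from them, which is one common rationale for GP-based policies in LfD.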
Odest Chadwicke Jenkins, Ph.D., is an Assistant Professor of Computer Science at Brown University. Prof. Jenkins earned his B.S. in Computer Science and Mathematics at Alma College (1996), M.S. in Computer Science at Georgia Tech (1998), and Ph.D. in Computer Science at the University of Southern California (2003). In 2007, he received Young Investigator funding from the Office of Naval Research and the Presidential Early Career Award for Scientists and Engineers for his work in learning primitive models of human motion for humanoid robot control and kinematic tracking.
For appointments, please contact Manuela Veloso (email@example.com)