MSR Speaking Qualifier

Siddharth Agrawal, Robotics Institute, Carnegie Mellon University
Thursday, August 12
1:00 pm to 2:00 pm
MSR Thesis Talk: Siddharth Agrawal

Title: Learning to Imitate, Adapt and Communicate

Abstract:
For AI agents to coexist with humans, they need to learn from us, adapt to perceived changes in our behavior, and communicate in a manner that is easily interpretable. In this work, we investigate three subproblems: imitation learning, adaptation in human-agent teaming, and learning interpretable communication. First, we study Generative Adversarial Imitation Learning (GAIL), a state-of-the-art imitation learning approach, and discuss its limitations both theoretically and empirically. In particular, we examine the reward bias issue in GAIL and propose neutral rewards to overcome it. We demonstrate that our method outperforms existing approaches.
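To make the reward bias concrete, below is a minimal numerical sketch. In GAIL, the policy's reward is derived from the discriminator output D(s, a); the common choices -log(1 - D) and log(D) are strictly positive and strictly negative respectively, which biases the agent toward longer or shorter episodes regardless of behavior quality. The talk does not spell out its neutral reward, so the log-odds form used below (log D - log(1 - D), which can take either sign) is an illustrative assumption, not the speaker's exact formulation.

```python
import numpy as np

# Sketch of the reward bias issue in GAIL (illustrative, not the talk's method).
# D(s, a) in (0, 1) is the discriminator's probability that (s, a) is expert data.

def positive_reward(d):
    # r = -log(1 - D): always > 0, so longer episodes always accumulate more
    # reward ("survival bias"), even for non-expert-like behavior.
    return -np.log(1.0 - d)

def negative_reward(d):
    # r = log(D): always < 0, biasing the agent toward ending episodes early.
    return np.log(d)

def neutral_reward(d):
    # Assumed illustrative form: the log-odds log D - log(1 - D), positive when
    # D > 0.5 and negative when D < 0.5, removing the fixed-sign bias.
    return np.log(d) - np.log(1.0 - d)

for d in (0.2, 0.5, 0.8):
    print(f"D={d:.1f}  positive={positive_reward(d):+.3f}  "
          f"negative={negative_reward(d):+.3f}  neutral={neutral_reward(d):+.3f}")
```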

Next, we study the problem of adaptation in the context of Team-Space-Fortress, a two-player game. We design an agent library that aims to capture the space of human policies, along with a set of similarity metrics that evaluate how closely a human's policy matches the policies in the library. We leverage these similarity metrics to adapt the policy of the AI agent partnered with a human, improving overall team performance.
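As a rough illustration of similarity-based adaptation (the talk's actual agent library and metrics are not specified here), one simple similarity metric scores each library policy by the likelihood it assigns to the human's observed actions; the hypothetical sketch below selects the best-matching library policy as the model of the human partner.

```python
import numpy as np

# Hypothetical sketch: every name below is illustrative. Each library policy
# maps a state to a probability distribution over discrete actions.

def log_likelihood_similarity(policy, trajectory):
    """Score a library policy by the average log-probability it assigns to
    the human's observed (state, action) pairs; higher means more similar."""
    return np.mean([np.log(policy(s)[a] + 1e-8) for s, a in trajectory])

def most_similar_policy(library, trajectory):
    """Pick the library policy that best explains the human's behavior."""
    scores = {name: log_likelihood_similarity(p, trajectory)
              for name, p in library.items()}
    return max(scores, key=scores.get), scores

# Toy example: two candidate "human models" over 3 discrete actions.
library = {
    "aggressive": lambda s: np.array([0.7, 0.2, 0.1]),
    "defensive":  lambda s: np.array([0.1, 0.2, 0.7]),
}
human_trajectory = [(None, 0), (None, 0), (None, 1)]  # human mostly picks action 0
best, scores = most_similar_policy(library, human_trajectory)
print(best, scores)  # "aggressive" should win
```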

For efficient human-agent teaming, it is also crucial that agents communicate in a manner that is interpretable to humans. Humans compose a finite vocabulary of words to communicate, and they communicate at a rate that another human can follow. Therefore, for AI agents to be good partners to humans, they need to learn communication protocols that use discrete messages instead of continuous vectors, and they need to learn when to communicate, at a rate humans can understand, i.e., the communication needs to be sparse. In this work, we first identify the shortcomings of existing approaches for learning sparse communication protocols with multi-agent reinforcement learning. We then propose a method capable of learning communication protocols that are sparse and use discrete tokens.
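A minimal sketch of what "discrete and sparse" communication can look like, assuming Gumbel-softmax discretization and a learned speaking gate; these are common building blocks in the multi-agent communication literature and are not necessarily the method proposed in the talk.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseDiscreteComm(nn.Module):
    """Illustrative sketch (not the talk's method): an agent emits a message
    drawn from a finite vocabulary (a discrete one-hot token) and a gate
    decides whether to speak at all (sparsity)."""

    def __init__(self, obs_dim, vocab_size, hidden=64):
        super().__init__()
        self.token_head = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, vocab_size))
        self.gate_head = nn.Linear(obs_dim, 1)  # propensity to speak

    def forward(self, obs, tau=1.0):
        token_logits = self.token_head(obs)
        # hard=True yields a one-hot token in the forward pass while the
        # straight-through estimator keeps gradients flowing.
        token = F.gumbel_softmax(token_logits, tau=tau, hard=True)
        # Gate in (0, 1): penalizing its expected value during training pushes
        # the agent toward speaking rarely. This soft gate only scales the
        # message toward zero; a hard or stochastic gate would give exact silence.
        gate = torch.sigmoid(self.gate_head(obs))
        return gate * token

comm = SparseDiscreteComm(obs_dim=8, vocab_size=10)
msg = comm(torch.randn(4, 8))  # batch of 4 agents' observations
print(msg.shape)               # torch.Size([4, 10])
```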

Committee:
Katia Sycara (advisor)
Jean Oh
Wenhao Luo

Zoom Link: https://cmu.zoom.us/j/98785850729?pwd=V2xuRmh0WkdiSWV3VHEvK05hY1R2QT09

Passcode: 511870