
Simulating emergent properties of human driving behavior using multi-agent RAIL

Raunak P. Bhattacharyya, Derek J. Phillips, Changliu Liu, Jayesh K. Gupta, Katherine Driggs-Campbell, and Mykel J. Kochenderfer
Conference Paper, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 789-795, May 2019

Abstract

Recent developments in multi-agent imitation learning have shown promising results for modeling the behavior of human drivers. However, it is challenging to capture emergent traffic behaviors that are observed in real-world datasets. Such behaviors arise from the many local interactions between agents that are not commonly accounted for in imitation learning. This paper proposes Reward Augmented Imitation Learning (RAIL), which integrates reward augmentation into the multi-agent imitation learning framework and allows the designer to specify prior knowledge in a principled fashion. We prove that convergence guarantees for the imitation learning process are preserved under the application of reward augmentation. This method is validated in a driving scenario, where an entire traffic scene is controlled by driving policies learned using our proposed algorithm. Further, we demonstrate improved performance compared to traditional imitation learning algorithms, both in the local actions of individual agents and in the emergent properties of traffic in complex, multi-agent settings.
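
The core idea lends itself to a short sketch. Below is a minimal, hypothetical Python illustration of reward augmentation in a GAIL-style imitation learner: the policy's learning signal combines the discriminator's surrogate reward with designer-specified penalties encoding prior knowledge (e.g., for collisions or off-road driving). The function and parameter names here (surrogate_reward, collision_penalty, and so on) are illustrative assumptions, not the authors' implementation.

# Sketch of the reward-augmentation idea behind RAIL, assuming a
# GAIL-style setup where the policy is trained with a surrogate reward
# derived from a discriminator D(s, a). All names are illustrative.
import numpy as np

def surrogate_reward(discriminator_logit: float) -> float:
    """GAIL-style imitation reward -log(1 - D(s, a)), where D is the
    probability the discriminator assigns to the state-action pair
    being expert-like."""
    d = 1.0 / (1.0 + np.exp(-discriminator_logit))  # sigmoid
    return -np.log(1.0 - d + 1e-8)

def augmented_reward(discriminator_logit: float,
                     collided: bool,
                     off_road: bool,
                     collision_penalty: float = 10.0,
                     off_road_penalty: float = 5.0) -> float:
    """Reward augmentation: add hand-specified penalties for
    undesirable events on top of the imitation reward, so the policy
    is steered away from behaviors the designer knows are bad."""
    r = surrogate_reward(discriminator_logit)
    if collided:
        r -= collision_penalty
    if off_road:
        r -= off_road_penalty
    return r

# Example: an action the discriminator finds expert-like (high logit)
# is still penalized heavily if it leads to a collision.
print(augmented_reward(2.0, collided=True, off_road=False))

Note that in this sketch the augmentation only shifts the reward seen by the policy optimizer and leaves discriminator training untouched, which is consistent with the paper's claim that the convergence guarantees of the imitation learning process are preserved.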

BibTeX

@conference{Bhattacharyya-2019-113167,
author = {Raunak P. Bhattacharyya and Derek J. Phillips and Changliu Liu and Jayesh K. Gupta and Katherine Driggs-Campbell and Mykel J. Kochenderfer},
title = {Simulating emergent properties of human driving behavior using multi-agent RAIL},
booktitle = {Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)},
year = {2019},
month = {May},
pages = {789--795},
}