Driving in Dense Traffic with Model-Free Reinforcement Learning

Dhruv Mauria Saxena, Sangjae Bae, Alireza Nakhaei, Kikuo Fujimura, and Maxim Likhachev
Conference Paper, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 5385-5392, May 2020

Abstract

Traditional planning and control methods can fail to find a feasible trajectory for an autonomous vehicle to execute amongst dense traffic on roads, because in these scenarios the obstacle-free volume in spacetime that the vehicle can drive through is very small. However, that does not mean the task is infeasible: human drivers are known to be able to drive amongst dense traffic by leveraging the cooperativeness of other drivers to open a gap. Traditional methods fail to take into account the fact that the actions taken by an agent affect the behaviour of other vehicles on the road. In this work, we rely on the ability of deep reinforcement learning to implicitly model such interactions and learn a continuous control policy over the action space of an autonomous vehicle. The application we consider requires our agent to negotiate and open a gap in the road in order to successfully merge or change lanes. Our policy learns to repeatedly probe into the target road lane while trying to find a safe spot to move into. We compare against two model predictive control-based algorithms and show that our policy outperforms them in simulation. As part of this work, we introduce a benchmark for driving in dense traffic for use by the community.
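The abstract describes learning a stochastic continuous control policy over the vehicle's action space. A minimal sketch of that idea is a policy that maps an observation of the ego vehicle and nearby traffic to a Gaussian distribution over (acceleration, steering) and squashes samples into actuator limits. Everything below (the linear parameterisation, observation contents, action names, and bounds) is an illustrative assumption, not the authors' network architecture:

```python
import math
import random

# Assumed actuator limits for the sketch (not taken from the paper).
ACTION_BOUNDS = {"acceleration": 2.0, "steering": 0.3}


class GaussianPolicy:
    """Toy continuous-control policy: linear mean head per action
    dimension plus a learned log standard deviation. A real model-free
    RL agent would use a neural network trained by policy-gradient
    updates; this sketch only shows the action-sampling interface."""

    def __init__(self, obs_dim, seed=0):
        rng = random.Random(seed)
        self.weights = [
            [rng.uniform(-0.1, 0.1) for _ in range(obs_dim)]
            for _ in ACTION_BOUNDS
        ]
        self.log_std = [math.log(0.5)] * len(ACTION_BOUNDS)

    def act(self, obs, rng):
        action = {}
        for i, (name, bound) in enumerate(ACTION_BOUNDS.items()):
            mean = sum(w * o for w, o in zip(self.weights[i], obs))
            std = math.exp(self.log_std[i])
            raw = rng.gauss(mean, std)
            # tanh squashing keeps the sampled action within bounds,
            # a common trick for bounded continuous action spaces.
            action[name] = bound * math.tanh(raw)
        return action


# Example: one action on a toy observation (ego speed, gap to the
# target lane, speed of the nearest vehicle in that lane).
policy = GaussianPolicy(obs_dim=3)
action = policy.act([12.0, 8.5, 11.0], random.Random(1))
```

During training, an RL algorithm would adjust the mean and log-std parameters from reward signals (e.g. penalising collisions, rewarding a completed merge), which is how repeated "probing" into the target lane can emerge without an explicit interaction model.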

BibTeX

@conference{Saxena-2020-125507,
author = {Dhruv Mauria Saxena and Sangjae Bae and Alireza Nakhaei and Kikuo Fujimura and Maxim Likhachev},
title = {Driving in Dense Traffic with Model-Free Reinforcement Learning},
booktitle = {Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)},
year = {2020},
month = {May},
pages = {5385-5392},
}