
Distributed Reinforcement Learning for Autonomous Driving

Master's Thesis, Tech. Report CMU-RI-TR-22-09, Robotics Institute, Carnegie Mellon University, May 2022

Abstract

Due to the complex and safety-critical nature of autonomous driving, recent work typically tests its ideas on simulators built specifically to advance self-driving research. Although autonomous driving can conveniently be modeled as a trajectory optimization problem, few of these methods resort to online reinforcement learning (RL) to address challenging driving scenarios. This is mainly because classic online RL algorithms were originally designed for toy problems such as Atari games, which are solvable within hours. In contrast, these online RL methods may take weeks or months to produce satisfactory results on self-driving tasks, owing both to the time-consuming simulation and to the difficulty of the problem itself. A promising online RL pipeline for autonomous driving should therefore be efficiency-driven.

In this thesis, we investigate the inefficiency of directly applying generic single-agent or distributed RL algorithms to CARLA self-driving pipelines, which stems from the expensive simulation cost. We propose two asynchronous distributed RL methods, Multi-Parallel SAC (off-policy) and Multi-Parallel PPO (on-policy), dedicated to accelerating online RL training on the CARLA simulator via a specialized distributed framework that establishes both inter-process and intra-process parallelization. We demonstrate that our distributed multi-agent RL algorithms achieve state-of-the-art performance on various CARLA self-driving tasks in substantially shorter, more practical training time.
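The asynchronous actor-learner pattern behind such a framework can be illustrated with a minimal sketch. Note that this is not the thesis implementation: it uses threads rather than the inter-process and intra-process parallelism described above, and a toy random "environment" stands in for CARLA, purely to show how parallel rollout actors feed a single learner through a shared queue without blocking one another.

```python
import queue
import random
import threading

def actor(worker_id, q, n_steps):
    """Toy rollout worker: a real actor would step a CARLA environment here.

    Pushes placeholder transitions (worker_id, timestep, reward) to the
    shared queue, then a None sentinel to signal completion.
    """
    rng = random.Random(worker_id)
    for t in range(n_steps):
        q.put((worker_id, t, rng.random()))
    q.put(None)

def learner(q, n_actors):
    """Consume transitions asynchronously until every actor has finished.

    A real learner would perform SAC/PPO gradient updates as data arrives;
    here we simply accumulate transitions into a replay buffer.
    """
    finished, replay = 0, []
    while finished < n_actors:
        item = q.get()
        if item is None:
            finished += 1
        else:
            replay.append(item)
    return replay

def run(n_actors=3, n_steps=5):
    """Launch parallel actors and collect their data in the learner."""
    q = queue.Queue()
    actors = [
        threading.Thread(target=actor, args=(i, q, n_steps))
        for i in range(n_actors)
    ]
    for th in actors:
        th.start()
    replay = learner(q, n_actors)
    for th in actors:
        th.join()
    return replay

if __name__ == "__main__":
    replay = run()
    print(len(replay))  # 3 actors x 5 steps = 15 transitions
```

Because the learner pulls from the queue as soon as data is available, slow rollouts from one worker never stall the others; the thesis framework applies the same principle across processes to hide CARLA's simulation latency.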

BibTeX

@mastersthesis{Huang-2022-131708,
author = {Zhe Huang},
title = {Distributed Reinforcement Learning for Autonomous Driving},
year = {2022},
month = {May},
school = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-22-09},
}