Learning to Drive using Waypoints

Tanmay Agarwal, Hitesh Arora, Tanvir Parhar, Shubhankar Deshpande, and Jeff Schneider
Workshop Paper, NeurIPS '19 Machine Learning for Autonomous Driving Workshop, December, 2019

Abstract

Traditional autonomous vehicle pipelines are highly modularized, with separate subsystems for localization, perception, actor prediction, planning, and control. Though this approach offers interpretability, its generalizability to unseen environments is limited, and it requires hand-engineering of numerous parameters, especially in the prediction and planning subsystems. Recently, Deep Reinforcement Learning (DRL) has been shown to master complex strategic games and perform challenging robotic tasks, making it an appealing framework for learning to drive. In this paper, we propose an architecture that learns to drive in the CARLA simulator directly from semantically segmented images along with waypoint features, using the Proximal Policy Optimization (PPO) algorithm. We report significant improvements in performance on the benchmark tasks of driving straight, one turn, and navigation with and without dynamic actors.
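The abstract describes combining a semantically segmented image with waypoint features as input to a PPO-trained policy. A minimal sketch of how such an observation might be assembled and fed to a policy head is shown below; all shapes, class counts, and function names here are illustrative assumptions, not the authors' implementation or CARLA's API.

```python
import numpy as np

# Hypothetical sketch: fuse a semantic-segmentation image with waypoint
# features into one observation vector, then apply a linear policy head.
# Shapes and names are assumptions for illustration only.

def make_observation(seg_image, waypoints):
    """Flatten an (H, W) class-label image, normalize it, and append
    the relative (x, y) coordinates of the next few route waypoints."""
    img_feat = seg_image.astype(np.float32).ravel() / max(seg_image.max(), 1)
    wp_feat = np.asarray(waypoints, dtype=np.float32).ravel()
    return np.concatenate([img_feat, wp_feat])

def policy_forward(obs, weights, bias):
    """Linear policy head mapping an observation to a bounded
    [steer, throttle] action mean (tanh squashes to [-1, 1])."""
    return np.tanh(obs @ weights + bias)

rng = np.random.default_rng(0)
seg = rng.integers(0, 13, size=(8, 8))        # e.g. 13 semantic classes
wps = [[1.0, 0.1], [2.0, 0.2], [3.0, 0.3]]    # next 3 waypoints, ego frame
obs = make_observation(seg, wps)              # 64 image dims + 6 waypoint dims
W = rng.normal(scale=0.01, size=(obs.size, 2))
b = np.zeros(2)
action = policy_forward(obs, W, b)            # 2-D action mean
```

In a full PPO setup the linear head would be replaced by a convolutional encoder over the segmented image, with the waypoint features concatenated at a later layer, but the observation-fusion idea is the same.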

BibTeX

@inproceedings{Agarwal-2019-123163,
author = {Tanmay Agarwal and Hitesh Arora and Tanvir Parhar and Shubhankar Deshpande and Jeff Schneider},
title = {Learning to Drive using Waypoints},
booktitle = {Proceedings of NeurIPS '19 Machine Learning for Autonomous Driving Workshop},
year = {2019},
month = {December},
}