
Learning On-Road Visual Control for Self-Driving Cars with Auxiliary Tasks

Yilun Chen, Praveen Palanisamy, Pri Mudalige, Katharina Muelling, and John M. Dolan
Conference Paper, Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV '19), pp. 331-338, January 2019

Abstract

A safe and robust on-road navigation system is a crucial component of fully automated vehicles. A recent approach [3] proposed an end-to-end algorithm that learns steering commands directly from the raw pixels of a front-facing camera using a single convolutional neural network. In this paper, we leverage auxiliary information beyond the raw images and design a novel network structure to boost driving performance while retaining the advantages of minimal training data and end-to-end training. First, we incorporate human common sense into vehicle navigation by transferring features from image recognition tasks. Second, we apply image semantic segmentation as an auxiliary task for navigation. Third, we capture temporal information by introducing an LSTM module into the network. Finally, we incorporate vehicle kinematics through a sensor fusion step. We show that our method outperforms the state-of-the-art visual navigation method both in the Udacity simulation environment and on the real-world comma.ai dataset. Our method also trains faster and produces more stable driving behavior than previous methods.
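
The sketch below is not the authors' released code; it is a minimal, hypothetical PyTorch illustration of the four ideas listed in the abstract: transferred image-recognition features (a pretrained ResNet-18 backbone), an auxiliary semantic-segmentation head, an LSTM over per-frame features for temporal context, and late fusion of vehicle kinematics before the steering output. All layer sizes, class counts, and names (e.g., `AuxDrivingNet`) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class AuxDrivingNet(nn.Module):
    """Hypothetical multi-task driving network combining the abstract's four ideas."""

    def __init__(self, num_seg_classes=13, kin_dim=1, hidden_dim=256):
        super().__init__()
        # (1) Transfer features from an ImageNet-pretrained backbone
        #     (downloads ImageNet weights on first use).
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])  # -> (B*T, 512, h, w)

        # (2) Auxiliary semantic-segmentation head producing coarse class logits.
        self.seg_head = nn.Conv2d(512, num_seg_classes, kernel_size=1)

        # (3) LSTM over pooled per-frame features to capture temporal information.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden_dim, batch_first=True)

        # (4) Fuse vehicle kinematics (e.g., speed) with the temporal state,
        #     then regress a steering command.
        self.control_head = nn.Sequential(
            nn.Linear(hidden_dim + kin_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, frames, kinematics):
        # frames: (B, T, 3, H, W); kinematics: (B, T, kin_dim)
        B, T = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1))            # (B*T, 512, h, w)
        seg_logits = self.seg_head(feats)                     # auxiliary output
        pooled = self.pool(feats).flatten(1).view(B, T, -1)   # (B, T, 512)
        temporal, _ = self.lstm(pooled)                       # (B, T, hidden_dim)
        fused = torch.cat([temporal, kinematics], dim=-1)     # sensor fusion
        steering = self.control_head(fused).squeeze(-1)       # (B, T)
        return steering, seg_logits


if __name__ == "__main__":
    net = AuxDrivingNet()
    clips = torch.randn(2, 4, 3, 224, 224)  # 2 clips of 4 frames
    kin = torch.randn(2, 4, 1)              # e.g., speed per frame
    steer, seg = net(clips, kin)
    print(steer.shape, seg.shape)           # (2, 4) and (8, 13, 7, 7)
```

In such a setup the steering regression loss would typically be combined with a segmentation cross-entropy loss as a weighted sum, so the auxiliary task shapes the shared visual features without being needed at inference time.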

BibTeX

@conference{Chen-2019-118690,
author = {Yilun Chen and Praveen Palanisamy and Pri Mudalige and Katharina Muelling and John M. Dolan},
title = {Learning On-Road Visual Control for Self-Driving Cars with Auxiliary Tasks},
booktitle = {Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV '19)},
year = {2019},
month = {January},
pages = {331-338},
keywords = {self-driving, computer vision, machine learning, LSTM},
}