Real-time Semantic Mapping for Autonomous Off-Road Navigation

Conference Paper, Proceedings of the 11th International Conference on Field and Service Robotics (FSR '17), pp. 335-350, September, 2017

Abstract

In this paper we describe a semantic mapping system for autonomous off-road driving with an All-Terrain Vehicle (ATV). The system's goal is to provide a richer representation of the environment than a purely geometric map, allowing it to distinguish, e.g., tall grass from obstacles. The system builds a 2.5D grid map encoding both geometric (terrain height) and semantic information (navigation-relevant classes such as trail, grass, etc.). The geometric and semantic information are estimated online and in real time from LiDAR and image sensor data, respectively. Using this semantic map, motion planners can create semantically aware trajectories. To achieve robust and efficient semantic segmentation, we design a custom Convolutional Neural Network (CNN) and train it with a novel dataset of labeled off-road imagery built for this purpose. We evaluate our semantic segmentation offline, showing comparable performance to the state of the art with slightly lower latency. We also show closed-loop field results with an autonomous ATV driving over challenging off-road terrain by using the semantic map in conjunction with a simple path planner. Our models and labeled dataset will be publicly available.
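The paper itself does not include code; purely as an illustration of the kind of 2.5D grid map the abstract describes, the following is a minimal Python sketch of a map whose cells hold a terrain height (from LiDAR) and a class probability vector (from image-based segmentation). The class names, the max-filter height update, and the multiplicative probability fusion are assumptions made for this sketch, not the authors' method.

```python
import numpy as np

# Hypothetical navigation-relevant classes; names assumed for illustration.
CLASSES = ["trail", "grass", "obstacle", "vegetation"]


class SemanticGridMap2p5D:
    """Minimal 2.5D grid map: each cell stores a terrain height estimate
    and a probability distribution over semantic classes."""

    def __init__(self, size_x, size_y, resolution):
        self.resolution = resolution  # meters per cell
        self.height = np.full((size_x, size_y), np.nan)  # terrain height [m]
        self.semantics = np.full((size_x, size_y, len(CLASSES)),
                                 1.0 / len(CLASSES))     # uniform prior

    def _to_index(self, x, y):
        return int(x / self.resolution), int(y / self.resolution)

    def update_height(self, x, y, z):
        """Fuse a LiDAR height measurement; a simple max filter is assumed here."""
        i, j = self._to_index(x, y)
        prev = self.height[i, j]
        self.height[i, j] = z if np.isnan(prev) else max(prev, z)

    def update_semantics(self, x, y, probs):
        """Fuse a per-cell class probability vector (e.g. projected CNN output)
        using a simple multiplicative Bayes-style update (an assumption)."""
        i, j = self._to_index(x, y)
        fused = self.semantics[i, j] * np.asarray(probs, dtype=float)
        self.semantics[i, j] = fused / fused.sum()

    def most_likely_class(self, x, y):
        i, j = self._to_index(x, y)
        return CLASSES[int(np.argmax(self.semantics[i, j]))]


# Example: a 20 m x 20 m map at 0.25 m resolution.
grid = SemanticGridMap2p5D(80, 80, 0.25)
grid.update_height(3.0, 4.0, 0.6)
grid.update_semantics(3.0, 4.0, [0.1, 0.7, 0.1, 0.1])
print(grid.most_likely_class(3.0, 4.0))  # -> "grass"
```

A planner could then score candidate trajectories using both the height layer (geometry) and the per-cell class probabilities (semantics), which is the kind of semantically aware planning the abstract refers to.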

BibTeX

@conference{Maturana-2017-102768,
author = {Daniel Maturana and Po-Wei Chou and Masashi Uenoyama and Sebastian Scherer},
title = {Real-time Semantic Mapping for Autonomous Off-Road Navigation},
booktitle = {Proceedings of 11th International Conference on Field and Service Robotics (FSR '17)},
year = {2017},
month = {September},
pages = {335 - 350},
keywords = {semantic mapping, semantic segmentation, autonomous vehicles, off-road, all-terrain vehicles},
}