Generating Omni-Directional View of Neighboring Objects for Ensuring Safe Urban Driving

Tech. Report CMU-RI-TR-14-11, Robotics Institute, Carnegie Mellon University, June 2014

Abstract

To reliably execute urban driving maneuvers, it is critical for self-driving cars to obtain, in a timely manner, the locations of road occupants (e.g., cars, pedestrians, bicyclists). If such information is estimated unreliably, it puts a self-driving car at great risk. To provide our self-driving car with this capability, this paper presents a perception algorithm that combines scan points from multiple automotive-grade LIDARs to generate temporally consistent and spatially seamless snapshots of neighboring (dynamic and static) objects. The outputs of this algorithm are omni-directional views of neighboring objects that provide timely information about free space. To this end, the proposed algorithm first represents a square region centered at the current location of the ego-vehicle and then, for each LIDAR scan, traces a virtual line segment between the LIDAR and the edge of its reliable sensing range, updating the cells on that segment. In tests with driving data from several urban streets, the proposed algorithm showed promising results, clearly identifying traversable regions within drivable areas while accurately updating objects' occupancies.
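The abstract describes an ego-centered grid updated by tracing virtual line segments from each LIDAR toward its returns. The sketch below is not the paper's implementation; it illustrates the general technique with an assumed log-odds occupancy grid and Bresenham ray tracing, where the grid size, resolution, increment values, and all function names are illustrative choices.

```python
# Illustrative sketch (not the author's code) of an ego-centered occupancy
# grid updated by ray tracing: cells along a LIDAR ray become more likely
# free, and the cell at the return becomes more likely occupied.
import numpy as np

GRID_SIZE = 200            # cells per side of the square region (assumed)
RESOLUTION = 0.5           # meters per cell (assumed)
L_FREE, L_OCC = -0.4, 0.9  # log-odds increments (assumed)

def world_to_cell(x, y, ego_x, ego_y):
    """Map a world coordinate into the grid, which is centered on the ego-vehicle."""
    cx = int((x - ego_x) / RESOLUTION) + GRID_SIZE // 2
    cy = int((y - ego_y) / RESOLUTION) + GRID_SIZE // 2
    return cx, cy

def bresenham(c0, c1):
    """Enumerate the grid cells on the line segment between two cells."""
    (x0, y0), (x1, y1) = c0, c1
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err = dx + dy
    cells = []
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return cells

def update_grid(grid, lidar_xy, hits_xy, ego_xy):
    """Trace a virtual segment from the LIDAR origin to each return and
    update the log-odds of every cell on that segment."""
    origin = world_to_cell(*lidar_xy, *ego_xy)
    for hx, hy in hits_xy:
        ray = bresenham(origin, world_to_cell(hx, hy, *ego_xy))
        for cx, cy in ray[:-1]:           # cells the ray passed through: free
            if 0 <= cx < GRID_SIZE and 0 <= cy < GRID_SIZE:
                grid[cy, cx] += L_FREE
        ex, ey = ray[-1]                  # cell at the return: occupied
        if 0 <= ex < GRID_SIZE and 0 <= ey < GRID_SIZE:
            grid[ey, ex] += L_OCC
    return grid

# Fusing multiple LIDARs into one omni-directional view then amounts to
# calling update_grid once per sensor, each with its own mounting origin.
grid = np.zeros((GRID_SIZE, GRID_SIZE))
grid = update_grid(grid, lidar_xy=(1.0, 0.0),
                   hits_xy=[(12.0, 3.5), (8.0, -2.0)], ego_xy=(0.0, 0.0))
```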

BibTeX

@techreport{Seo-2014-7881,
author = {Young-Woo Seo},
title = {Generating Omni-Directional View of Neighboring Objects for Ensuring Safe Urban Driving},
year = {2014},
month = {June},
institution = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-14-11},
}