Active Perception using Light Curtains for Autonomous Driving

Conference Paper, Proceedings of the European Conference on Computer Vision (ECCV), pp. 751–766, August 2020

Abstract

Most real-world 3D sensors such as LiDARs perform fixed scans of the entire environment and are decoupled from the recognition system that processes the sensor data. In this work, we propose a method for 3D object recognition using light curtains, a resource-efficient controllable sensor that measures depth at user-specified locations in the environment. Crucially, we propose using the prediction uncertainty of a deep-learning-based 3D point cloud detector to guide active perception. Given a neural network’s uncertainty, we develop a novel optimization algorithm that optimally places light curtains to maximize coverage of uncertain regions. Efficient optimization is achieved by encoding the physical constraints of the device into a constraint graph, which is optimized with dynamic programming. We show how a 3D detector can be trained to detect objects in a scene by sequentially placing uncertainty-guided light curtains that successively improve detection accuracy. Links to code can be found on the project webpage.
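To make the placement step concrete, the sketch below illustrates the kind of dynamic program the abstract describes: each node of the constraint graph is a candidate depth on one camera ray, edges connect depths on adjacent rays that the device can physically reach, and the optimal curtain is the maximum-uncertainty path through the graph. This is a minimal illustration, not the authors’ implementation; the uncertainty grid, the single max_step bound (standing in for the device’s real velocity and acceleration limits), and the function name plan_curtain are all assumptions made for this example.

import numpy as np

def plan_curtain(uncertainty, depths, max_step):
    """Place one light curtain by dynamic programming over a constraint graph.

    uncertainty: (R, D) array of detector uncertainty for each of R camera
        rays at each of D candidate depths (hypothetical input format).
    depths: (D,) array of the candidate depth values shared by all rays.
    max_step: largest allowed depth change between neighboring rays; a
        simplified stand-in for the device's physical constraints.
    Returns a list of depth indices, one per ray, maximizing the total
    uncertainty covered by the curtain.
    """
    R, D = uncertainty.shape
    value = np.full((R, D), -np.inf)       # best score of a path ending at (ray, depth)
    parent = np.zeros((R, D), dtype=int)   # backpointers for path recovery
    value[0] = uncertainty[0]
    for r in range(1, R):
        for d in range(D):
            # An edge exists only if the neighboring ray's depth is reachable.
            reachable = np.abs(depths - depths[d]) <= max_step
            prev = np.where(reachable, value[r - 1], -np.inf)
            parent[r, d] = int(np.argmax(prev))
            value[r, d] = uncertainty[r, d] + prev[parent[r, d]]
    # Backtrack from the best final node to recover the curtain profile.
    path = [int(np.argmax(value[-1]))]
    for r in range(R - 1, 0, -1):
        path.append(int(parent[r, path[-1]]))
    return path[::-1]

Called on a random grid, e.g. plan_curtain(np.random.rand(64, 100), np.linspace(1.0, 20.0, 100), max_step=1.0), this returns a smooth profile that threads through high-uncertainty cells. The per-ray cost of this sketch is quadratic in the number of candidate depths, which suggests why a graph-structured search is tractable compared to enumerating all D^R possible curtains.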

Notes
We thank Matthew O’Toole for feedback on the initial draft of this paper. This material is based upon work supported by the National Science Foundation under Grant Nos. IIS-1849154 and IIS-1900821, and by the United States Air Force and DARPA under Contract No. FA8750-18-C-0092.

BibTeX

@conference{Ancha-2020-126672,
author = {Siddharth Ancha and Yaadhav Raaj and Peiyun Hu and Srinivasa G. Narasimhan and David Held},
title = {Active Perception using Light Curtains for Autonomous Driving},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2020},
month = {August},
pages = {751--766},
}