A Single View Perspective Prior for Efficient Detection

Master's Thesis, Tech. Report CMU-RI-TR-23-42, August 2023

Abstract

Real-time, efficient perception is critical for autonomous navigation and city-scale sensing. Orthogonal to architectural improvements, streaming perception approaches have exploited adaptive sampling to improve real-time detection performance. In this work, we propose a learnable geometry-guided prior that incorporates the rough geometry of the 3D scene (a ground plane and a plane above it) to resample images for efficient object detection. This significantly improves detection of small and far-away objects while also being more efficient in both latency and memory. For autonomous navigation, using the same detector and scale, our approach improves the detection rate for small objects by +39% and real-time detection performance by +63% over the state-of-the-art (SOTA). For fixed traffic cameras, our approach detects small objects at image scales where other methods fail. At the same scale, our approach improves small-object detection by 195% over naive downsampling and by 63% over SOTA.
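
The following is a minimal, illustrative sketch of the general idea of geometry-guided resampling, not the thesis implementation: under a ground-plane perspective prior, objects near the horizon project to fewer pixels, so rows near an assumed horizon are sampled more densely before the downsized image is passed to a detector. All names and parameters (horizon_frac, sharpness, etc.) are assumptions chosen for illustration.

import numpy as np

def vertical_sampling_density(height, horizon_frac=0.45, sharpness=0.02):
    """Per-row sampling density peaked at an assumed horizon row."""
    rows = np.arange(height)
    horizon = horizon_frac * height
    # Higher density near the horizon (small, far-away objects),
    # lower density toward the top and bottom of the image.
    density = 1.0 / (1.0 + sharpness * np.abs(rows - horizon))
    return density / density.sum()

def resample_rows(image, out_height):
    """Nonuniformly resample image rows according to the density (nearest row)."""
    h = image.shape[0]
    density = vertical_sampling_density(h)
    cdf = np.cumsum(density)
    # Invert the CDF: output row i maps back to the source row where the
    # cumulative density reaches (i + 0.5) / out_height.
    targets = (np.arange(out_height) + 0.5) / out_height
    src_rows = np.searchsorted(cdf, targets).clip(0, h - 1)
    return image[src_rows]

if __name__ == "__main__":
    img = np.random.randint(0, 255, (1080, 1920, 3), dtype=np.uint8)
    warped = resample_rows(img, out_height=540)
    print(warped.shape)  # (540, 1920, 3): half the rows, denser near the horizon

In the thesis the prior is learnable and derived from the scene geometry (ground plane and a plane above it) rather than fixed per row as in this toy example; the sketch only shows how a perspective-dependent sampling density can concentrate resolution where small objects appear.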

BibTeX

@mastersthesis{Ghosh-2023-137572,
author = {Anurag Ghosh},
title = {A Single View Perspective Prior for Efficient Detection},
year = {2023},
month = {August},
school = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-23-42},
keywords = {computer vision, perspective geometry, efficient object detection, edge inference},
}