Efficient, Multi-Fidelity Perceptual Representations via Hierarchical Gaussian Mixture Models

Master's Thesis, Tech. Report CMU-RI-TR-17-44, Robotics Institute, Carnegie Mellon University, August 2017

Abstract

Operating mobile autonomous systems in real-world environments and having them accomplish meaningful tasks requires a high-fidelity perceptual representation that enables efficient inference. Reasoning efficiently in the space of raw sensor observations is challenging, primarily because measurements depend on the sensor type, are corrupted by noise, and, in some cases, are prohibitively large. A perceptual representation that abstracts away sensor nuances is thus required to enable effective and efficient reasoning in a priori unknown environments. This thesis presents a probabilistic environment representation that allows efficient, high-fidelity modeling and inference towards enabling informed planning (active perception) on a computationally constrained mobile autonomous system. A major challenge is the need to generate and update the model in real time given the computational and memory limitations of a mobile robot; in the existing mobile robotics literature, this constraint has generally forced a compromise on model fidelity.

To address this challenge, the proposed approach exploits the structure of real-world environments and models dependencies between spatially distinct locations. Gaussian Mixture Models are employed to capture these structural dependencies and to learn a semi-parametric, continuous spatial model from measurements of the environment. A hierarchy of these arbitrary-resolution models yields a multi-fidelity representation, with the variation in fidelity quantified via information-theoretic measures. Crucially for active perception, the spatial model is extended to a distribution over occupancy, with uncertainty captured via a variance estimate associated with each model prediction. The compact nature and representative capability of the proposed model, coupled with a real-time embedded GPU-based implementation, enable high-fidelity, memory-efficient modeling and inference, as demonstrated on real-world datasets from diverse environments.
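
As a rough illustration of the modeling idea (not the thesis's GPU-based implementation), the sketch below fits a small hierarchy of Gaussian Mixture Models to a 3D point set with scikit-learn, queries the continuous density at an arbitrary location, and estimates the information gained between adjacent fidelity levels with a Monte-Carlo KL divergence. The synthetic point data, the component counts per level, and the KL-based fidelity measure are illustrative assumptions, not details taken from the thesis.

# Minimal sketch: multi-fidelity hierarchy of GMMs over a 3D point cloud.
# Assumptions: synthetic points, component counts (8, 32, 128), and a
# Monte-Carlo KL estimate as a stand-in for the information-theoretic measure.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
points = rng.normal(size=(5000, 3))  # placeholder for 3D sensor returns (x, y, z)

# Coarse-to-fine hierarchy: fewer components means a lower-fidelity model.
levels = [
    GaussianMixture(n_components=k, covariance_type="full", random_state=0).fit(points)
    for k in (8, 32, 128)
]

# The continuous model can be queried at arbitrary locations; the density acts
# as an occupancy-like belief about the queried point.
query = np.array([[0.1, -0.3, 0.5]])
for lvl, gmm in enumerate(levels):
    log_density = gmm.score_samples(query)[0]
    print(f"level {lvl}: log density at query point = {log_density:.3f}")

# Monte-Carlo estimate of KL(coarse || finer) quantifies the fidelity gained by
# descending one level of the hierarchy.
samples, _ = levels[0].sample(2000)
kl_01 = np.mean(levels[0].score_samples(samples) - levels[1].score_samples(samples))
print(f"approx. KL(level 0 || level 1) = {kl_01:.3f} nats")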

BibTeX

@mastersthesis{Srivastava-2017-27032,
author = {Shobhit Srivastava},
title = {Efficient, Multi-Fidelity Perceptual Representations via Hierarchical Gaussian Mixture Models},
year = {2017},
month = {August},
school = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-17-44},
keywords = {Machine Perception, 3D Perception, 3D Modeling, 3D Mapping, Active Perception, Multimodal modeling, Information Representation},
}