
Efficient, Multi-Fidelity Perceptual Representations via Hierarchical Gaussian Mixture Models

Shobhit Srivastava
Master's Thesis, Tech. Report, CMU-RI-TR-17-44, Robotics Institute, Carnegie Mellon University, August, 2017


Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author’s copyright. These works may not be reposted without the explicit permission of the copyright holder.

Abstract

Operation of mobile autonomous systems in real-world environments, and their participation in meaningful tasks, requires a high-fidelity perceptual representation that enables efficient inference. Reasoning directly in the space of sensor observations is challenging, primarily due to the dependence of measurements on sensor type, measurement noise, and, in some cases, the prohibitive size of sensor data. A perceptual representation that abstracts out sensor nuances is thus required to enable effective and efficient reasoning in a priori unknown environments. This thesis presents a probabilistic environment representation that allows efficient high-fidelity modeling and inference, enabling informed planning (active perception) on a computationally constrained mobile autonomous system. A major challenge is the need for real-time generation and update of the model given the computational and memory limitations of a mobile robot. In the existing mobile robotics literature, this constraint has generally forced a compromise on the fidelity of the model.

To address this challenge, the proposed approach exploits the structure of real world environments and models dependencies between spatially distinct locations. Gaussian Mixture Models are employed to capture these structural dependencies and learn a semi-parametric continuous spatial model from the measurements of the environment. A hierarchy of these arbitrary resolution models enables a multi-fidelity representation with the variation in fidelity quantified via information-theoretic measures. Crucially for active perception, the spatial model is extended to a distribution over occupancy with a measure of uncertainty incorporated via a variance estimate associated with model predictions. The compact nature and representative capability of the proposed model coupled with a real-time embedded GPU-based implementation enables high-fidelity and memory-efficient modeling and inference as demonstrated on real-world datasets in diverse environments.
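The abstract's core idea of a semi-parametric, multi-fidelity spatial model built from Gaussian mixtures can be illustrated with a minimal sketch. This is not the thesis implementation (which runs on an embedded GPU and extends the model to occupancy with variance estimates); it simply shows, using scikit-learn's `GaussianMixture` on synthetic 3D point measurements, how mixtures with different component counts form coarse and fine fidelity levels of the same scene.

```python
# Hedged sketch, not the thesis implementation: fit Gaussian mixtures of
# differing component counts to 3D point measurements, giving a coarse and
# a fine fidelity level of the same continuous spatial density model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic "sensor" points: two thin planar surface patches with noise.
patch_a = rng.normal([0.0, 0.0, 0.0], [1.0, 1.0, 0.05], size=(500, 3))
patch_b = rng.normal([4.0, 0.0, 1.0], [1.0, 1.0, 0.05], size=(500, 3))
points = np.vstack([patch_a, patch_b])

# Two fidelity levels: a coarse 2-component model and a finer 8-component one.
coarse = GaussianMixture(n_components=2, covariance_type="full",
                         random_state=0).fit(points)
fine = GaussianMixture(n_components=8, covariance_type="full",
                       random_state=0).fit(points)

# Either model can be queried continuously at arbitrary locations: the
# per-point log-density stands in for the learned spatial model's prediction.
query = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 1.0], [2.0, 0.0, 10.0]])
coarse_logp = coarse.score_samples(query)
fine_logp = fine.score_samples(query)
```

A mixture stores only component means, covariances, and weights, which is what makes the representation compact relative to raw point clouds; in the thesis, a hierarchy of such models at increasing component counts provides the multi-fidelity trade-off, with the fidelity gap between levels quantified information-theoretically.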

BibTeX Reference
@mastersthesis{Srivastava-2017-27032,
author = {Shobhit Srivastava},
title = {Efficient, Multi-Fidelity Perceptual Representations via Hierarchical Gaussian Mixture Models},
year = {2017},
month = {August},
school = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-17-44},
keywords = {Machine Perception, 3D Perception, 3D Modeling, 3D Mapping, Active Perception, Multimodal modeling, Information Representation},
}