
Learning Visuomotor Policies for Aerial Navigation Using Cross-Modal Representations

Rogerio Bonatti, Ratnesh Madaan, Vibhav Vineet, Sebastian Scherer, and Ashish Kapoor
Conference Paper, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1637 - 1644, October 2020

Abstract

Machines are a long way from robustly solving open-world perception-control tasks, such as first-person view (FPV) aerial navigation. While recent advances in end-to-end Machine Learning, especially Imitation Learning and Reinforcement Learning, appear promising, they are constrained by the need for large amounts of difficult-to-collect labeled real-world data. Simulated data, on the other hand, is easy to generate, but generally does not render safe behaviors in diverse real-life scenarios. In this work we propose a novel method for learning robust visuomotor policies for real-world deployment that can be trained purely with simulated data. We develop rich state representations that combine supervised and unsupervised environment data. Our approach takes a cross-modal perspective, where separate modalities correspond to the raw camera data and the system states relevant to the task, such as the relative pose of gates to the drone in the case of drone racing. We feed both data modalities into a novel factored architecture, which learns a joint low-dimensional embedding via Variational Auto-Encoders. This compact representation is then fed into a control policy, which we train using imitation learning with expert trajectories in a simulator. We analyze the rich latent spaces learned with our proposed representations, and show that the use of our cross-modal architecture significantly improves control policy performance as compared to end-to-end learning or purely unsupervised feature extractors. We also present real-world results for drone navigation through gates in different track configurations and environmental conditions. Our proposed method, which runs fully onboard, can successfully generalize the learned representations and policies across simulation and reality, significantly outperforming baseline approaches.
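
To make the cross-modal idea concrete, below is a minimal sketch (not the authors' released code) of such an architecture in PyTorch: a single image encoder produces a latent vector that is decoded both into a reconstructed image and into the gate pose relative to the drone, so the latent space is shaped by both modalities. The layer sizes, the 4-D gate-pose parameterization, and the loss weighting are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalVAE(nn.Module):
    """Sketch of a cross-modal VAE: one encoder, two modality decoders."""
    def __init__(self, latent_dim=10):
        super().__init__()
        # Image encoder: 64x64 RGB -> flattened features.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 8x8
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(128 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(128 * 8 * 8, latent_dim)
        # Decoder for the image modality.
        self.img_decoder = nn.Sequential(
            nn.Linear(latent_dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        # Decoder for the state modality: relative gate pose, assumed here
        # to be (distance, azimuth, elevation, gate yaw).
        self.pose_decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, 4),
        )

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, img):
        h = self.encoder(img)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)
        return self.img_decoder(z), self.pose_decoder(z), mu, logvar

def cmvae_loss(img, pose, img_hat, pose_hat, mu, logvar, beta=1.0):
    # Reconstruction terms for both modalities plus the KL regularizer.
    rec_img = F.mse_loss(img_hat, img)
    rec_pose = F.mse_loss(pose_hat, pose)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec_img + rec_pose + beta * kl
```

In this sketch, the compact latent vector `z` (rather than raw pixels) would then be the input to a small downstream policy network trained by imitation learning on expert trajectories in simulation, which is what lets the representation transfer between simulated and real imagery.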

BibTeX

@conference{Bonatti-2020-126645,
author = {Rogerio Bonatti and Ratnesh Madaan and Vibhav Vineet and Sebastian Scherer and Ashish Kapoor},
title = {Learning Visuomotor Policies for Aerial Navigation Using Cross-Modal Representations},
booktitle = {Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
year = {2020},
month = {October},
pages = {1637--1644},
}