Inverse Reinforcement Learning for Autonomous Ground Navigation Using Aerial and Satellite Observation Data

Master's Thesis, Tech. Report, CMU-RI-TR-19-24, Robotics Institute, Carnegie Mellon University, August, 2019

Abstract

Inverse reinforcement learning (IRL) is a learning-from-demonstration paradigm in which a learner observes expert demonstrations to recover a hidden cost function and thereby mimic the expert's behavior. By eliminating the need for elaborate feature engineering, deep IRL approaches have been gaining interest in various problem domains, including robot navigation. With low-cost drones and satellite services increasing the availability of 2D and 3D data over large areas, there has been growing interest in end-to-end autonomous navigation systems that use aerial and satellite information in settings where prior knowledge of the environment is often outdated or no longer valid. In this work, we propose a Conditional Multimodal Deep Inverse Reinforcement Learning approach that uses a deep neural network to learn sophisticated features from both 2D and 3D data and to generate cost maps for multiple driving behaviors in the global planning context.
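To illustrate the core idea of recovering a hidden cost function from demonstrations, the sketch below runs maximum-entropy IRL on a toy one-step problem. The per-state features, the linear cost model, and all numbers are illustrative assumptions; the thesis itself uses a deep network over aerial and satellite features rather than this hand-crafted linear stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-state terrain features (e.g. slope, vegetation density),
# stand-ins for the aerial/satellite-derived features in the thesis.
features = np.array([[0.0, 1.0],   # flat, vegetated
                     [1.0, 0.0],   # steep, clear
                     [0.2, 0.2],   # mild terrain
                     [0.9, 0.8]])  # steep and vegetated
true_theta = np.array([2.0, 0.5])  # hidden cost weights (unknown to learner)
true_cost = features @ true_theta
p_expert = np.exp(-true_cost) / np.exp(-true_cost).sum()
demos = rng.choice(len(features), size=2000, p=p_expert)  # expert choices

# MaxEnt IRL: fit theta so that softmax(-cost) reproduces the expert's
# empirical feature expectations.
theta = np.zeros(2)
f_expert = features[demos].mean(axis=0)    # empirical feature expectation
for _ in range(500):
    cost = features @ theta
    logits = -cost - (-cost).max()         # numerically stable softmax
    p = np.exp(logits) / np.exp(logits).sum()
    grad = p @ features - f_expert         # d(log-likelihood)/d(theta)
    theta += 0.5 * grad                    # gradient ascent on concave objective

learned = np.exp(-(features @ theta))
learned /= learned.sum()                   # learned state-visitation distribution
```

At convergence the learned distribution's feature expectations match the expert's, which is the defining property of the maximum-entropy IRL solution; the deep variant in the thesis replaces the linear map `features @ theta` with a network over multimodal imagery.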

BibTeX

@mastersthesis{Song-2019-117058,
author = {Yeeho Song},
title = {Inverse Reinforcement Learning for Autonomous Ground Navigation Using Aerial and Satellite Observation Data},
year = {2019},
month = {August},
school = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-19-24},
keywords = {Inverse Reinforcement Learning, Learning from Demonstration, Autonomous Navigation, Conditional Learning, Multimodal Learning},
}