Learning Preference Models for Autonomous Mobile Robots in Complex Domains

PhD Thesis, Tech. Report CMU-RI-TR-10-41, Robotics Institute, Carnegie Mellon University, December 2010

Abstract

Achieving robust and reliable autonomous operation even in complex unstructured environments is a central goal of field robotics. As the environments and scenarios to which robots are applied have continued to grow in complexity, so has the challenge of properly defining preferences and tradeoffs between various actions and the terrain they traverse. These definitions and parameters encode the desired behavior of the robot; their correctness is therefore of the utmost importance. Current manual approaches to creating and adjusting these preference models and cost functions have proven tedious and time-consuming, and typically produce suboptimal results in all but the simplest circumstances. This thesis presents the development and application of machine learning techniques that automate the construction and tuning of preference models within complex mobile robotic systems. Within the framework of inverse optimal control, expert examples of robot behavior are used to construct models that generalize demonstrated preferences and reproduce similar behavior. Novel learning from demonstration approaches are developed that can significantly reduce the amount of human interaction necessary to tune a system while also improving its final performance. Techniques to account for the inevitability of noisy and imperfect demonstration are presented, along with additional methods for improving the efficiency of expert demonstration and feedback. The effectiveness of these approaches is confirmed through application to several real-world domains, such as the interpretation of static and dynamic perceptual data in unstructured environments and the learning of human driving styles and maneuver preferences. Extensive testing and experimentation, both in simulation and in the field with multiple mobile robotic systems, provides empirical confirmation of superior autonomous performance with less expert interaction and no hand tuning. These experiments validate the potential applicability of the developed algorithms to a wide variety of future mobile robotic systems.
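
As a rough illustration of the inverse optimal control idea described in the abstract (a minimal sketch in the maximum-margin-planning style, not the specific algorithms developed in the thesis), the following Python example learns a linear terrain cost function from a single demonstrated path: feature counts along the planner's current best path are compared against feature counts along the expert's path, and the cost weights are updated until the demonstration itself becomes a minimum-cost plan. The toy grid, feature dimensions, learning rate, and helper names are all illustrative assumptions.

# Illustrative sketch only: learn a linear cost function over terrain
# features so that an expert-demonstrated path becomes the planner's
# minimum-cost path. Toy data and hyperparameters are assumptions.
import heapq
import numpy as np

def dijkstra_path(cost, start, goal):
    """Minimum-cost 4-connected path over a per-cell cost grid."""
    rows, cols = cost.shape
    dist = np.full((rows, cols), np.inf)
    prev = {}
    dist[start] = cost[start]
    pq = [(cost[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and d + cost[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + cost[nr, nc]
                prev[(nr, nc)] = (r, c)
                heapq.heappush(pq, (dist[nr, nc], (nr, nc)))
    path, node = [goal], goal
    while node != start:          # walk the predecessor chain back to the start
        node = prev[node]
        path.append(node)
    return path[::-1]

def feature_counts(features, path):
    """Sum of per-cell feature vectors along a path."""
    return sum(features[r, c] for r, c in path)

# Toy map: each cell carries two terrain features (e.g. "grass", "mud").
rng = np.random.default_rng(0)
H = W = 10
features = rng.random((H, W, 2))
start, goal = (0, 0), (H - 1, W - 1)

# Expert demonstration simulated from hidden "true" preferences.
w_true = np.array([1.0, 5.0])                     # expert strongly avoids feature 2
expert_path = dijkstra_path(features @ w_true, start, goal)

# Subgradient loop: raise the cost of features the planner over-uses
# relative to the demonstration, lower the cost of features it under-uses.
w, lr = np.ones(2), 0.1
for _ in range(100):
    planned = dijkstra_path(features @ w, start, goal)
    grad = feature_counts(features, planned) - feature_counts(features, expert_path)
    w = np.maximum(w + lr * grad, 1e-6)           # project to keep cell costs positive

# The learned weights need only rank paths the way the expert does; they
# will not in general match w_true exactly (costs are defined up to scale).
print("true weights:   ", w_true)
print("learned weights:", w)

The thesis applies this kind of learning to much richer perceptual features, planners, and demonstration sources, but the basic loop sketched here, plan under the current costs, compare feature counts with the demonstration, and update the cost function, conveys the underlying idea.
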

BibTeX

@phdthesis{Silver-2010-10588,
author = {David Silver},
title = {Learning Preference Models for Autonomous Mobile Robots in Complex Domains},
year = {2010},
month = {December},
school = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-10-41},
keywords = {Mobile Robots, Field Robotics, Learning from Demonstration, Imitation Learning, Inverse Optimal Control, Active Learning, Preference Models, Cost Functions, Parameter Tuning},
}