Learning Preference Models for Autonomous Mobile Robots in Complex Domains

David Silver
doctoral dissertation, tech. report CMU-RI-TR-10-41, Robotics Institute, Carnegie Mellon University, December, 2010


Download
  • Adobe Portable Document Format (PDF) (20 MB)
Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Abstract
Achieving robust and reliable autonomous operation even in complex unstructured environments is a central goal of field robotics. As the environments and scenarios to which robots are applied have continued to grow in complexity, so has the challenge of properly defining preferences and tradeoffs between candidate actions and the terrain they cause the robot to traverse. These definitions and parameters encode the desired behavior of the robot; their correctness is therefore of the utmost importance. Current manual approaches to creating and adjusting these preference models and cost functions have proven tedious and time-consuming, and typically do not produce optimal results except in the simplest of circumstances. This thesis presents the development and application of machine learning techniques that automate the construction and tuning of preference models within complex mobile robotic systems. Utilizing the framework of inverse optimal control, expert examples of robot behavior are used to construct models that generalize demonstrated preferences and reproduce similar behavior. Novel learning from demonstration approaches are developed that offer the possibility of significantly reducing the amount of human interaction necessary to tune a system, while also improving its final performance. Techniques to account for the inevitability of noisy and imperfect demonstration are presented, along with additional methods for improving the efficiency of expert demonstration and feedback. The effectiveness of these approaches is confirmed through application to several real-world domains, such as the interpretation of static and dynamic perceptual data in unstructured environments and the learning of human driving styles and maneuver preferences. Extensive testing and experimentation, both in simulation and in the field with multiple mobile robotic systems, provides empirical confirmation of superior autonomous performance, with less expert interaction and no hand tuning. These experiments validate the potential applicability of the developed algorithms to a large variety of future mobile robotic systems.
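
To make the inverse optimal control setup concrete, the sketch below shows one common way such preference learning can be framed: learn linear cost weights over terrain features so that a planner reproduces a demonstrated path. This is an illustrative toy example under assumed names and structure (a 4-connected grid, per-cell feature vectors, and a plain projected-subgradient update via hypothetical functions plan_path, feature_counts, and learn_weights); it is not the thesis's exact algorithm.

   # Illustrative sketch: learn per-feature cost weights from one demonstration.
   import heapq

   def plan_path(weights, features, start, goal):
       """Dijkstra over a 4-connected grid; cell cost = max(eps, w . f(cell))."""
       rows, cols = len(features), len(features[0])
       def cell_cost(r, c):
           return max(1e-3, sum(w * f for w, f in zip(weights, features[r][c])))
       dist, parent = {start: 0.0}, {}
       pq = [(0.0, start)]
       while pq:
           d, node = heapq.heappop(pq)
           if node == goal:
               break
           if d > dist.get(node, float("inf")):
               continue
           r, c = node
           for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
               if 0 <= nr < rows and 0 <= nc < cols:
                   nd = d + cell_cost(nr, nc)
                   if nd < dist.get((nr, nc), float("inf")):
                       dist[(nr, nc)] = nd
                       parent[(nr, nc)] = node
                       heapq.heappush(pq, (nd, (nr, nc)))
       # Reconstruct the path from goal back to start.
       path, node = [goal], goal
       while node != start:
           node = parent[node]
           path.append(node)
       return list(reversed(path))

   def feature_counts(path, features):
       """Sum the feature vector over every cell visited by a path."""
       k = len(features[0][0])
       totals = [0.0] * k
       for r, c in path:
           for i in range(k):
               totals[i] += features[r][c][i]
       return totals

   def learn_weights(features, expert_path, start, goal, steps=50, lr=0.1):
       """Subgradient inverse optimal control: adjust weights until the
       demonstrated path is at least as cheap as the planner's best path."""
       k = len(features[0][0])
       weights = [1.0] * k
       for _ in range(steps):
           planned = plan_path(weights, features, start, goal)
           f_expert = feature_counts(expert_path, features)
           f_planned = feature_counts(planned, features)
           # Raise the cost of features the expert avoids and lower the cost
           # of features the expert prefers, relative to the current plan.
           weights = [max(0.0, w - lr * (fe - fp))
                      for w, fe, fp in zip(weights, f_expert, f_planned)]
       return weights

For instance, with two features per cell (say, a vegetation indicator and a smooth-ground indicator), repeatedly planning and updating drives the weights toward values under which the demonstrated route is itself the cheapest plan, which is the essential idea behind learning a cost function from demonstration rather than hand tuning it.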

Keywords
Mobile Robots, Field Robotics, Learning from Demonstration, Imitation Learning, Inverse Optimal Control, Active Learning, Preference Models, Cost Functions, Parameter Tuning

Notes
Associated Center(s) / Consortia: National Robotics Engineering Center and Field Robotics Center

Text Reference
David Silver, "Learning Preference Models for Autonomous Mobile Robots in Complex Domains," doctoral dissertation, tech. report CMU-RI-TR-10-41, Robotics Institute, Carnegie Mellon University, December, 2010

BibTeX Reference
@phdthesis{Silver_2010_6773,
   author = "David Silver",
   title = "Learning Preference Models for Autonomous Mobile Robots in Complex Domains",
   school = "Robotics Institute, Carnegie Mellon University",
   month = "December",
   year = "2010",
   number = "CMU-RI-TR-10-41",
   address = "Pittsburgh, PA",
}