Multi-Armed Recommendation Bandits for Selecting State Machine Policies for Robotic Systems

Conference Paper, Proceedings of the International Conference on Robotics and Automation (ICRA), pp. 4545–4551, May 2013

Abstract

We investigate the problem of selecting a state machine from a library to control a robot, particularly when evaluating such state machines on a given robotic task is expensive. As a motivating example, we consider a simulated vacuuming robot that must select a driving state machine well-suited to a particular (unknown) room layout. Borrowing concepts from collaborative filtering (the recommender systems behind services such as Netflix and Amazon.com), we present a multi-armed bandit formulation that incorporates recommendation techniques to efficiently select state machines for individual room layouts. We show that this formulation outperforms either technique alone (recommendation or multi-armed bandits), as well as the baseline of selecting the "average best" state machine across all rooms.
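As a rough illustration of the idea (a minimal sketch, not the paper's exact algorithm), the Python program below combines a UCB-style bandit with a collaborative-filtering predictor: performances of a library of state machines ("arms") on previously seen rooms form a ratings matrix, an SVD of that matrix yields a low-rank arm basis, and a new room's partially observed performance row is fit to that basis to predict the arms not yet tried. The synthetic data, the rank-3 factorization, and all names (predict_row, train, etc.) are assumptions made for this sketch.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training library: performance ("ratings") of K state machines
# on R previously seen rooms, generated from a low-rank model plus noise.
R, K, rank = 40, 8, 3
room_factors = rng.normal(size=(R, rank))
arm_factors = rng.normal(size=(rank, K))
train = room_factors @ arm_factors + 0.1 * rng.normal(size=(R, K))

# Collaborative-filtering step: learn a low-rank arm basis from the
# training matrix (SVD stands in for a recommender's factorization).
_, _, Vt = np.linalg.svd(train, full_matrices=False)
basis = Vt[:rank]  # (rank, K) row space of the performance matrix

def predict_row(observed, mask):
    """Fit the new room's partially observed row to the arm basis and
    return predicted performance for every arm, tried or not."""
    coef, *_ = np.linalg.lstsq(basis[:, mask].T, observed[mask], rcond=None)
    return coef @ basis

# A new room whose per-arm performance is unknown; each evaluation is
# expensive, so the bandit should commit to a good arm in few pulls.
true_perf = rng.normal(size=rank) @ arm_factors

counts = np.zeros(K)
sums = np.zeros(K)
mask = np.zeros(K, dtype=bool)

for t in range(1, 26):
    observed_means = np.divide(sums, counts, out=np.zeros(K), where=counts > 0)
    if mask.any():
        # Observed arms use their empirical mean; untried arms use the
        # recommendation-based prediction.
        estimates = np.where(mask, observed_means,
                             predict_row(observed_means, mask))
    else:
        estimates = train.mean(axis=0)  # prior: the "average best" baseline
    bonus = np.sqrt(2.0 * np.log(t) / (counts + 1.0))  # UCB exploration term
    arm = int(np.argmax(estimates + bonus))
    reward = true_perf[arm] + 0.1 * rng.normal()  # one expensive evaluation
    counts[arm] += 1
    sums[arm] += reward
    mask[arm] = True

print("best arm (true):", int(np.argmax(true_perf)))
print("most pulled arm:", int(np.argmax(counts)))

The key design point this sketch tries to capture is that the recommender supplies informed estimates for arms the bandit has never pulled, so the bandit need not evaluate every state machine once before it can exploit, which matters when each trial is expensive.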

BibTeX

@conference{Matikainen-2013-7695,
author = {Pyry K. Matikainen and Padraig Michael Furlong and Rahul Sukthankar and Martial Hebert},
title = {Multi-Armed Recommendation Bandits for Selecting State Machine Policies for Robotic Systems},
booktitle = {Proceedings of the International Conference on Robotics and Automation (ICRA)},
year = {2013},
month = {May},
pages = {4545--4551},
}