
Learning Distributional Models for Relative Placement in Robotics

Master's Thesis, Tech. Report CMU-RI-TR-24-24, May 2024

Abstract

Relative placement tasks are an important category of tasks in which one object must be placed in a desired pose relative to another object. Previous work has shown success in learning relative placement tasks from just a small number of demonstrations by using relational reasoning networks with geometric inductive biases. However, such methods cannot flexibly represent multimodal tasks, such as a mug that can hang on any of n racks. We propose a method that incorporates additional properties enabling the learning of multimodal relative placement solutions, while retaining the provably translation-invariant and relational properties of prior work. We show that our method learns precise relative placement tasks from only 10-20 multimodal demonstrations, with no human annotations, across a diverse set of objects within a category.
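To make the translation-invariance property mentioned above concrete: a relative placement goal can be expressed as the pose of the placed object in the anchor object's frame, which is unchanged when both objects are translated together in the world. The following is a minimal sketch of that property using homogeneous transforms; it is an illustration of the geometric idea only, not the thesis's actual model, and all function names are hypothetical.

```python
import numpy as np

def relative_pose(T_anchor, T_object):
    # Pose of the object expressed in the anchor's frame:
    # T_rel = T_anchor^{-1} @ T_object.
    return np.linalg.inv(T_anchor) @ T_object

def translate(T, t):
    # Apply a world-frame translation t to a homogeneous transform T.
    T_out = T.copy()
    T_out[:3, 3] += t
    return T_out

# Example poses: anchor (e.g., a rack) rotated 90 degrees about z,
# object (e.g., a mug) nearby with identity rotation.
T_anchor = np.eye(4)
T_anchor[:3, :3] = np.array([[0.0, -1.0, 0.0],
                             [1.0,  0.0, 0.0],
                             [0.0,  0.0, 1.0]])
T_anchor[:3, 3] = [1.0, 0.0, 0.5]

T_object = np.eye(4)
T_object[:3, 3] = [1.2, 0.1, 0.6]

# Translating both objects by the same world offset leaves the
# relative pose unchanged: the goal is translation-invariant.
t = np.array([5.0, -3.0, 2.0])
rel_before = relative_pose(T_anchor, T_object)
rel_after = relative_pose(translate(T_anchor, t), translate(T_object, t))
assert np.allclose(rel_before, rel_after)
```

A multimodal task like the n-rack example corresponds to a distribution over such relative poses (one mode per rack) rather than a single target, which is what a unimodal regression model cannot represent.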

BibTeX

@mastersthesis{Wang-2024-140605,
author = {Jenny Wang},
title = {Learning Distributional Models for Relative Placement in Robotics},
year = {2024},
month = {May},
school = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-24-24},
keywords = {manipulation, imitation learning, robotics, multimodal},
}