Generative Robotics: Self-Supervised Learning for Human-Robot Collaborative Creation

PhD Thesis Defense

Peter Schaldenbrand
Postdoctoral Fellow, Robotics Institute, Carnegie Mellon University
Friday, October 10
2:30 pm to 4:30 pm
Newell-Simon Hall 4305
Generative Robotics: Self-Supervised Learning for Human-Robot Collaborative Creation
Abstract:
Robotic automation is generally welcomed for tasks that are dirty, dull, or dangerous, but as robotic capabilities expand, robots are entering domains that are safe and enjoyable, such as the creative industries. Although automation is widely rejected in creative fields, many people, from amateurs to professionals, would welcome supportive or collaborative creative tools. Supporting creative tasks with real-world robotics is challenging because relevant datasets are scarce, creative tasks are abstract and high-level, and real-world tools and materials are difficult to model and predict. Learning-based robotic intelligence is a promising foundation for creative support tools, but the task is complex enough that common approaches such as learning from demonstration would require too many samples, and reinforcement learning may never converge. In this thesis, we show that robots can learn to support acts of creativity using only a small set of proposed self-supervised learning techniques.

We formalize robots that support people in making things from high-level goals in the real world as a new field, Generative Robotics. We introduce an approach that supports 2D visual art-making, with paintings and drawings, as well as 3D clay sculpting from a fixed perspective. Because no robotic datasets exist for collaborative painting and sculpting, we designed our approach to learn real-world constraints and support collaborative interactions from small, self-generated datasets. Our approach uses (1) Real2Sim2Real to enable a robot to teach itself about physical constraints (e.g., the type of paint and brush), (2) semantic planning to plan from high-level, abstract goals under severe real-world constraints (e.g., making a painting from a detailed photograph with only 4 colors and 64 brush strokes), and (3) self-supervised learning to generate data that trains the robot to support creation rather than automate it. With this approach, the robot collaboratively creates paintings in heavily constrained settings. Lastly, we generalize the approach to new materials, tools, action representations, and state representations to perform long-horizon clay sculpting.
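
As a rough illustration of the self-supervised Real2Sim2Real idea described above, the sketch below is not the thesis implementation: the StrokeRenderer model, the stroke parameterization, and the robot/camera stub are hypothetical, and PyTorch is assumed. It shows a robot generating its own small dataset by executing random brush strokes, fitting a learned stroke simulator to the observed outcomes, and leaving that model available for downstream stroke planning.

# Minimal sketch (assumed, not the thesis code) of a Real2Sim2Real
# self-supervision loop for stroke-based painting.
import torch
import torch.nn as nn

class StrokeRenderer(nn.Module):
    """Predicts the canvas image after a stroke, given the current canvas and stroke params."""
    def __init__(self, n_params=8):
        super().__init__()
        self.param_proj = nn.Linear(n_params, 32 * 32)
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, canvas, stroke):
        # canvas: (B, 3, 32, 32); stroke: (B, n_params)
        p = self.param_proj(stroke).view(-1, 1, 32, 32)
        return self.net(torch.cat([canvas, p], dim=1))

def execute_stroke_on_robot(canvas, stroke):
    # Hypothetical stand-in for real hardware: paint the stroke, then photograph the canvas.
    # Here it only returns a synthetic perturbation so the sketch runs end to end.
    return (canvas + 0.05 * torch.randn_like(canvas)).clamp(0, 1)

# 1) Self-generate a small real-world dataset of (canvas, stroke, next_canvas) triples.
data = []
canvas = torch.zeros(1, 3, 32, 32)
for _ in range(64):
    stroke = torch.rand(1, 8)                      # random stroke parameters
    next_canvas = execute_stroke_on_robot(canvas, stroke)
    data.append((canvas, stroke, next_canvas))
    canvas = next_canvas

# 2) Fit the learned simulator (Real2Sim) by regressing observed outcomes.
model = StrokeRenderer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(10):
    for c, s, nxt in data:
        loss = nn.functional.mse_loss(model(c, s), nxt)
        opt.zero_grad()
        loss.backward()
        opt.step()

# 3) The fitted model can now act as a simulator for planning strokes (Sim2Real),
#    e.g., optimizing stroke parameters so the predicted canvas approaches a goal image.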

Thesis Committee Members:

Jean Oh, Chair

James McCann

Manuela Veloso

Ken Goldberg, UC Berkeley