We formalize robots that support people in making things from high-level goals in the real world as a new field, Generative Robotics. We introduce an approach that supports 2D visual art-making, in paintings and drawings, as well as 3D clay sculpting from a fixed perspective. Because no robotic datasets exist for collaborative painting and sculpting, we designed our approach to learn real-world constraints and support collaborative interactions from small, self-generated datasets. Our approach uses (1) Real2Sim2Real to enable a robot to teach itself about physical constraints (e.g., the type of paint and brush), (2) semantic planning to plan from high-level, abstract goals under severe real-world constraints (e.g., making a painting from a detailed photograph with only 4 colors and 64 brush strokes), and (3) self-supervised learning to generate data that trains the robot to support creation rather than automate it. We demonstrate that our approach collaboratively creates paintings in heavily constrained settings. Lastly, we generalize our approach to new materials, tools, action representations, and state representations to perform long-horizon clay sculpting.
Thesis Committee Members:
Jean Oh, Chair
James McCann
Manuela Veloso
Ken Goldberg, UC Berkeley
