Towards Dexterous Robotic Manipulation by Imitating Experts
Abstract
Imitation learning offers a scalable path for transferring complex manipulation skills from expert demonstrators to robots. However, its success hinges on capturing high-quality demonstrations and effectively transferring them to robot policies, especially in contact-rich or dynamically changing environments. This thesis explores how imitation learning, when paired with teleoperation and classical solvers, can be used to teach robots dexterous manipulation skills across a range of real-world scenarios.
We first present BiDex, a bimanual teleoperation system for collecting rich demonstrations of human dexterity. By learning from this data via behavior cloning, we enable visuomotor policies that generalize to tasks such as hand-offs, tool use, and fine-grained object manipulation. Building on this, we introduce FACTR, a teleoperation system enhanced with force feedback that enables contact-rich behaviors with out-of-distribution generalization. We show that imitation learning in this setting benefits from access to both force and visual modalities, leading to policies that are compliant and robust.
Beyond human experts, we demonstrate that imitation learning is also effective for learning from classical methods. We show that by combining behavior cloning with teacher-student fine-tuning, we enable low-latency motion generation in cluttered or dynamic environments. Finally, we explore how to leverage 3D scene representations built with Gaussian Splatting to improve the viewpoint robustness of behavior cloning policies. Together, these results showcase how imitation learning unlocks dexterous robotic manipulation skills when scaffolded with the right human interfaces, training strategies, and perception tools.
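To make the behavior cloning setup referenced throughout the abstract concrete, the sketch below shows a minimal supervised policy update that regresses expert (teleoperated) actions from fused visual and force observations. It is an illustrative toy in PyTorch, not the thesis's actual architecture or training code: the network sizes, the 64x64 image and 6-axis force/torque inputs, and the names VisuomotorPolicy and behavior_cloning_step are assumptions made for this example.

import torch
import torch.nn as nn

class VisuomotorPolicy(nn.Module):
    """Toy visuomotor policy: fuses image and force/torque features to predict actions."""
    def __init__(self, action_dim=7, force_dim=6):
        super().__init__()
        # Small CNN encoder for camera observations (assumed 3x64x64 images).
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 13 * 13, 128), nn.ReLU(),
        )
        # MLP encoder for force/torque readings.
        self.force = nn.Sequential(nn.Linear(force_dim, 32), nn.ReLU())
        # Fused features are mapped to a continuous action.
        self.head = nn.Sequential(
            nn.Linear(128 + 32, 64), nn.ReLU(),
            nn.Linear(64, action_dim),
        )

    def forward(self, image, force):
        feat = torch.cat([self.vision(image), self.force(force)], dim=-1)
        return self.head(feat)

def behavior_cloning_step(policy, optimizer, batch):
    """One supervised update: regress the expert (teleoperated) action."""
    pred = policy(batch["image"], batch["force"])
    loss = nn.functional.mse_loss(pred, batch["action"])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    policy = VisuomotorPolicy()
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
    # Dummy batch standing in for demonstrations collected via teleoperation.
    batch = {
        "image": torch.randn(8, 3, 64, 64),
        "force": torch.randn(8, 6),
        "action": torch.randn(8, 7),
    }
    print(behavior_cloning_step(policy, optimizer, batch))

The same supervised-regression structure applies whether the expert is a human teleoperator or a classical solver; only the source of the demonstration batch changes.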
BibTeX
@mastersthesis{Li-2025-146611,
author = {Yulong Li},
title = {Towards Dexterous Robotic Manipulation by Imitating Experts},
year = {2025},
month = {May},
school = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-25-29},
keywords = {imitation learning, manipulation},
}