1:30 pm to 2:30 pm
Newell-Simon Hall 4305
Abstract:
Assistive robotic systems have the potential to significantly enhance autonomy and independence for individuals with physical disabilities. However, existing shared autonomy frameworks typically rely on static policies trained offline, which adapt poorly to unforeseen environmental variations or evolving user behaviors. In addition, teleoperating a high degree-of-freedom (DoF) robotic manipulator through a low-dimensional (low-DoF) user interface often requires frequent, disruptive manual mode switches, degrading user experience and operational efficiency.
To address these issues, we introduce two complementary frameworks: Incrementally Learned Shared Autonomy (ILSA) and LLM-Driven Automatic Mode Switching (LAMS). ILSA incrementally adapts assistive robot policies during real-world deployment, leveraging user feedback through structured incremental learning to remain robust and adaptable in dynamic scenarios. LAMS employs Large Language Models (LLMs) to automate mode switching, predicting appropriate control mappings through natural-language reasoning over task context and incremental user corrections, thereby substantially reducing the manual interaction burden.
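
The following is a minimal conceptual sketch (in Python) of the mode-switching idea described above: an LLM is asked to choose the joystick-to-robot control mapping for the next step, given a natural-language task context and the manual corrections observed so far. All names here (query_llm, LamsState, the prompt wording, the mode list) are illustrative assumptions for exposition, not the authors' implementation.

    # Conceptual sketch only; names and prompt wording are hypothetical.
    from dataclasses import dataclass, field
    from typing import Callable, List

    # Example set of low-DoF control modes for a high-DoF arm (assumed, not from the talk).
    MODES = ["translation-xy", "translation-z", "wrist-rotation", "gripper"]

    @dataclass
    class LamsState:
        task_context: str                                       # e.g. "bring the cup to the user's mouth"
        corrections: List[str] = field(default_factory=list)    # manual mode overrides observed so far

    def build_prompt(state: LamsState) -> str:
        """Compose a prompt asking the LLM to pick the next control mode."""
        correction_text = "\n".join(f"- {c}" for c in state.corrections) or "- none yet"
        return (
            f"Task: {state.task_context}\n"
            f"Available modes: {', '.join(MODES)}\n"
            f"Past user corrections:\n{correction_text}\n"
            "Which single mode should the low-DoF joystick control next? "
            "Answer with one mode name only."
        )

    def select_mode(state: LamsState, query_llm: Callable[[str], str]) -> str:
        """query_llm is any callable mapping a prompt string to the model's text reply."""
        reply = query_llm(build_prompt(state)).strip().lower()
        # Fall back to the first mode if the reply is not a recognized mode name.
        return reply if reply in MODES else MODES[0]

In this sketch, each manual override the user makes is appended to state.corrections, so subsequent queries incorporate that feedback; this is one plausible way to realize the "incremental user corrections" mentioned above.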
We conduct thorough experimental evaluations and user studies on assistive teleoperation tasks with a Kinova Gen3 robotic arm. Results demonstrate that ILSA significantly enhances policy adaptability and task success rates, while LAMS substantially reduces cognitive load and mode-switching frequency compared with baseline approaches. Together, these frameworks offer an integrated solution that advances the usability, responsiveness, and practicality of assistive robotic teleoperation systems.
Committee:
Prof. Zackory Erickson (advisor)
Prof. David Held
Yufei Wang
