Influence-Aware Safety for Human-Robot Interaction - Robotics Institute Carnegie Mellon University

PhD Thesis Defense

Ravi Pandya
PhD Student, Robotics Institute, Carnegie Mellon University

Friday, August 29
9:00 am to 10:30 am
Newell-Simon Hall 3305
Influence-Aware Safety for Human-Robot Interaction
Abstract:
In recent years, we have seen how influential (and potentially harmful) algorithms can be in our lives through recommender systems and language models, which sometimes create polarization and conspiracies that lead to unsafe behavior. Now that robots are becoming more common in the real world, we must ensure that AI-driven systems are aware of the influence they have on people, especially when it comes to physical, real-world behavior. In this thesis, we focus on the problem of influence-aware safe control for human-robot interaction, in hopes of enabling robots to intentionally and positively influence people, making their interactions with robots safer and more efficient. We first study this problem from the safe control perspective by introducing a novel method for handling the multimodality of the robot’s uncertainty over a human’s intention inside a robust safe controller. Next, we explore methods for generating influence-aware robot behavior at different levels of abstraction: action-directed, goal-directed, and strategy-directed. We ultimately find useful tools for designing robot behaviors that proactively influence human collaborators toward positive outcomes. We then introduce a method for solving the influence-aware safe control problem that uses reach-avoid dynamic games to incorporate a prediction model of the human into the synthesis of a safe controller, ultimately allowing for more efficient interactions without sacrificing safety. Finally, we extend this problem formulation to more general interactions between a human and a language-based AI assistant, demonstrating that safety-critical reinforcement learning can automatically learn guardrails for the AI assistant that steer the human toward safer outcomes.
Ultimately, we hope that the work in this thesis will help researchers in robotics (and beyond) understand the importance of modeling the influence that AI-driven agents have on people, and how to use that understanding to keep people safe.

Thesis Committee Members:

Changliu Liu, Co-chair

Andrea Bajcsy, Co-chair

Aditi Raghunathan

Guy Rosman, Toyota Research Institute