Abstract: Robot safety is a nuanced concept. We commonly equate safety with collision avoidance, but in complex, real-world environments (i.e., the “open world”) it can encompass much more: for example, a mobile manipulator should understand when it is not confident about a requested task, know that areas roped off by caution tape should never be breached, and pull objects gently from clutter so that nothing falls. However, designing robots that have such a nuanced safety understanding, and that can reliably generate appropriate actions, remains an outstanding challenge.
Bio: Andrea Bajcsy is an Assistant Professor in the Robotics Institute at Carnegie Mellon University, where she leads the Interactive and Trustworthy Robotics Lab (Intent Lab). She works broadly at the intersection of robotics, machine learning, control theory, and human-AI interaction. Prior to joining CMU, Andrea received her Ph.D. in Electrical Engineering & Computer Science from the University of California, Berkeley in 2022, and previously worked at NVIDIA Research for Autonomous Driving. She is the recipient of the DARPA Young Faculty Award (2025), the NSF CAREER Award (2025), the Google Research Scholar Award (2024), a Best Paper Award Finalist selection from the IEEE RAS Technical Committee on Robot Control (2024), the Rising Stars in EECS Award (2021), an Honorable Mention for the T-RO Best Paper Award (2020), and the NSF Graduate Research Fellowship (2016).
