09/22/2025    Mallory Lindahl

Andrea Bajcsy, an assistant professor at the Robotics Institute in Carnegie Mellon University’s School of Computer Science, has earned a DARPA Young Faculty Award (YFA) for her work to create models and algorithms that help embodied AI systems make more reliable decisions in diverse environments. 

The YFA program provides funding to promising junior faculty at U.S. academic and nonprofit institutions to support research related to national security needs. 

Embodied AI agents operating in the world can include ground vehicles that use vision-based navigation and reasoning systems; intelligent robotic manipulation systems that must safely interact with sensitive materials; and, more broadly, AI agents that interact with digital environments. These agents face two related but distinct safety challenges.

First, they must understand how to prevent nuanced physical safety hazards caused by their own actions.

“Safe control techniques currently enable agents to prevent long-term safety hazards, but they are mostly limited to simple safety specifications like collision,” Bajcsy said. “We want them to understand how to avoid more nuanced safety hazards of the open world, like spilling hazardous chemicals, driving over dangerous debris or breaking delicate objects.” 

AI agents must also understand how to act in scenarios that fall outside their training data.

“Imagine a robot that encounters an unknown scenario, like entering a dark room when it has only been trained on light rooms,” Bajcsy said. “Many current methods can detect that something is unusual, but they don’t come with mitigation strategies. This conundrum leaves AI agents knowing that they don’t know what to do, but unable to generate appropriate recovery actions.” 

Bajcsy’s YFA-funded project, “Unifying Uncertainty and Safety for Embodied AI Agents,” takes a three-tiered approach to these challenges. She and her colleagues will work to understand uncertainties in AI agents’ internal representations of the world, quantify the robustness and reliability of those representations, and develop frameworks that help agents generate appropriate actions under both physical and distributional safety hazards. Improved behavior could include steering away from safety hazards or asking a supervisor for assistance with a task.
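That “fall back when uncertain” behavior can be illustrated with a toy example. The Python sketch below is a hypothetical illustration, not the project’s actual methods: it approximates an agent’s epistemic uncertainty by the disagreement of a bootstrapped ensemble of learned policies, acts on the ensemble when its members agree, and switches to a conservative fallback action when they do not. All models, names and thresholds here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: observations near the origin, a true
# linear mapping to 2-D actions, plus a little noise.
X = rng.normal(size=(200, 4))
W_true = rng.normal(size=(4, 2))
Y = X @ W_true + 0.1 * rng.normal(size=(200, 2))

def fit_policy():
    """Fit one 'learned policy' (least squares) on a bootstrap sample."""
    idx = rng.integers(0, len(X), size=len(X))
    W, *_ = np.linalg.lstsq(X[idx], Y[idx], rcond=None)
    return lambda obs: obs @ W

ensemble = [fit_policy() for _ in range(10)]

def safe_fallback(obs):
    """Conservative default action, e.g., slow to a stop."""
    return np.zeros(2)

def act(obs, threshold=0.2):
    proposals = np.stack([policy(obs) for policy in ensemble])
    # Epistemic-uncertainty proxy: how much the ensemble disagrees.
    disagreement = proposals.std(axis=0).max()
    if disagreement > threshold:
        # Looks out-of-distribution: don't trust the learned policy;
        # fall back (or, in a real system, ask a supervisor for help).
        return safe_fallback(obs), "fallback"
    return proposals.mean(axis=0), "learned"

for label, obs in [("familiar", rng.normal(size=4)),
                   ("novel", 50.0 * np.ones(4))]:
    action, mode = act(obs)
    print(f"{label} observation -> {mode} action {np.round(action, 2)}")
```

On an observation resembling the training data, the ensemble agrees and the learned action is used; on a far-out-of-distribution observation, the members extrapolate differently, disagreement crosses the threshold and the agent falls back, analogous to the automatic recovery behavior the project aims to enable.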

“The YFA grant will provide critical support to my lab as we enable agents to use their learned knowledge in familiar situations while automatically falling back to safer strategies when entering new situations,” Bajcsy said. “This work will contribute to the long-term mission of building robust embodied AI agents that humans can rely on in the real world.”

For More Information: Aaron Aupperlee | 412-268-9068 | aaupperlee@cmu.edu