06/20/2025    Mallory Lindahl

Every day, the U.S. Federal Aviation Administration (FAA) handles tens of thousands of flights. Airspace traffic involves not only commercial planes but also helicopters, experimental light aircraft, freight carriers and an increasing number of uncrewed aerial systems (UAS). As air traffic density increases, so does the need for highly precise collision avoidance and traffic management systems to ensure safety and efficiency in flight.

Current systems such as autonomous collision avoidance systems (ACS) and unmanned traffic management (UTM) frameworks have shown some effectiveness in mitigating in-air collisions. However, these systems depend on multiple heavy sensor modalities, making them poorly suited to smaller aircraft.

Recently graduated Robotics Institute (RI) Ph.D. student Jay Patrikar explained the need for a different approach: “Visual detect-and-avoid (DAA) is a critical capability for safe flights in shared airspace. Most of the current burden falls on the pilot, who must visually identify and steer clear of nearby aircraft. This task is already difficult, and becomes even more challenging for small uncrewed aerial systems, which lack onboard pilots and often operate without the heavy sensors used on manned aircraft.”

Building on this understanding, a team from the AirLab at Carnegie Mellon University’s Robotics Institute created ViSafe, a vision-only airborne collision avoidance system to equip UAS with “see-and-avoid” capabilities.

Members of the team pose with their aircraft used for testing ViSafe capabilities.

Its vision-based design makes ViSafe passive, infrastructure-free, and generalizable to a wide range of aircraft. The system builds on the team's previous work, AirTrack, which uses high-resolution detection and tracking networks to spot aerial objects. ViSafe extends the initial AirTrack technology to multi-camera settings and uses control barrier functions (CBFs) to guarantee safety.

“Control Barrier Functions keep systems safe in real time by gently adjusting actions only when needed, ensuring they stay within safe limits without being overly conservative,” said Parv Kapoor, a Ph.D. student in the Software and Societal Systems Department at CMU. “The fast, minimal adjustments help systems stay close to the original control.”
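As a toy illustration of this minimal-adjustment idea (not ViSafe's actual formulation), the sketch below applies a scalar CBF condition to a commanded closing speed: the nominal command passes through untouched while separation is large, and is clipped only as the safety margin shrinks. The function name and parameters (`d_safe`, `alpha`) are illustrative assumptions.

```python
def cbf_safety_filter(v_nominal, separation, d_safe=50.0, alpha=0.5):
    """Minimally adjust a commanded closing speed (m/s) so the CBF
    condition  d_dot + alpha * h(d) >= 0  holds, where h(d) = d - d_safe.

    With d_dot = -v (positive v closes the gap), the condition reduces to
    v <= alpha * (d - d_safe), so the filter simply clips the command.
    """
    v_max_safe = alpha * (separation - d_safe)  # largest safe closing speed
    return min(v_nominal, v_max_safe)           # intervene only when needed

# Far from the intruder, the nominal command passes through unchanged:
print(cbf_safety_filter(30.0, separation=1000.0))  # 30.0
# Near the safety boundary, the command is clipped toward zero:
print(cbf_safety_filter(30.0, separation=60.0))    # 5.0
```

In the full vector-valued setting, this clipping generalizes to a small quadratic program that finds the control closest to the nominal one while satisfying the CBF constraint, which is what keeps the filter from being overly conservative.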

To bridge the gap between theoretical formulations and real-world behavior, the researchers prepared extensive testing for ViSafe, which included developing a “digital twin” of the ViSafe system using NVIDIA Isaac Sim. “The simulation environment allowed comprehensive testing of ViSafe’s perception abilities across diverse scenarios,” said Ian Higgins, an RI engineer and master’s student.

Following successful simulation testing, the team moved ViSafe to the field to see whether the system held up under real-world conditions. Over approximately 80 hours of flight testing, they used the simulation results to create representative real-world configurations and analyze the system’s collision avoidance capabilities.

“It’s one thing to show results in a clean, controlled lab setting—but demonstrating consistent performance in complex, unstructured real-world conditions requires strict process control and a tremendous amount of flight testing,” said Patrikar.

Through both high-fidelity simulation and real-world flight tests, the team found that their system significantly reduces collision risk, even when operating under the strict Size, Weight, Power, and Cost (SWaP-C) constraints typical of small uncrewed aircraft. These constraints make it impractical to rely on bulky, power-intensive sensors. ViSafe’s vision-only approach offers a lightweight, passive alternative that maintains strong safety performance, enabling broader deployment on agile, resource-constrained platforms.

“At closing speeds above 140 kilometers per hour, ViSafe must spot a 10-pixel speck half a kilometer away, crunch 4K video onboard in real time, and steer clear of obstacles in less than ten seconds,” said RI Ph.D. student Nikhil Keetha. “These numbers show just how precise ViSafe’s capabilities are.”
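A back-of-the-envelope check, using only the figures quoted above (not a statement about ViSafe's measured latency), shows why the ten-second budget is so tight:

```python
closing_speed_kmh = 140.0   # "closing speeds above 140 kilometers per hour"
detection_range_m = 500.0   # "half a kilometer away"

# Convert km/h to m/s, then compute the head-on time to collision.
closing_speed_ms = closing_speed_kmh / 3.6                   # ~38.9 m/s
time_to_collision_s = detection_range_m / closing_speed_ms   # ~12.9 s

print(round(closing_speed_ms, 1), round(time_to_collision_s, 1))  # 38.9 12.9
```

Detecting an intruder at 500 meters leaves roughly 13 seconds before a head-on collision, so completing the entire detect, track, and avoid loop in under ten seconds leaves only a few seconds of margin.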

The team will present ViSafe in June at the 2025 Robotics: Science and Systems (RSS) conference, one of the leading international venues for new work in robotics and autonomous systems. Their presentation will cover both the technical design of ViSafe and the system’s real-world applications.

“With airspace growing more crowded, ViSafe shows how AI-driven safety augmentation can enable scalable, certifiable autonomy in the national airspace, and marks an early step toward deploying AI in safety-critical domains,” said Sebastian Scherer, associate research professor at RI. “Eventually, this will help unlock routine, trusted autonomy in shared skies.”

For More Information: Aaron Aupperlee | 412-268-9068 | aaupperlee@cmu.edu