VASC Seminar: David Thompson
Orbiters, Landers, and Rovers: Computer Vision for Autonomous Space Exploration
NASA Jet Propulsion Laboratory
October 29, 2012, 3pm-4pm, NSH 1305
Traditional spacecraft operations rely on scripted command sequences for actuation and data collection. This imposes a slow Earth-driven command cycle with limited bandwidth, intermittent communications and latencies of minutes to hours. The resulting delays affect nearly every aspect of mission design and operations. For surface rovers, simple spacecraft activities such as short drives and arm placement can take days to execute. In deep space, comet and asteroid flybys generally rely on preplanned command sequences during these short, critical time intervals.
Automatic image analysis, including scene parsing and object detection, can permit simple onboard science decision-making that reduces the need for micromanagement from Earth. It enables wholly new operations strategies, with faster reconnaissance by rovers and autonomous monitoring by orbiters. This talk will survey several application areas where mainstream computer vision techniques have started to benefit space exploration. Automated hyperspectral image analysis has enabled the Earth Observing One spacecraft to construct summary maps of surface materials from its remote observations. The AEGIS target detection system allows the Mars Exploration Rover Opportunity to identify and react to unanticipated science features. Other techniques in development range from geologic scene parsing to detection of transient plume outbursts during comet missions. In each case, insights gleaned from terrestrial computer vision research result in tangible improvements to mission science yield.
Host: Alexei Efros
Appointments: Bernardo Pires (email@example.com)
David Thompson received a PhD from the Robotics Institute, where he participated in the FRC Life in the Atacama and Science Autonomy projects. These days he is a research technologist at the Jet Propulsion Laboratory, California Institute of Technology. He works in the Machine Learning and Instrument Autonomy group, and focuses on pattern recognition applications for autonomous exploration and science data analysis. David leads several JPL research projects including the TextureCam “smart camera,” target detection for primitive bodies exploration, and spectroscopy for the NASA Orbiting Carbon Observatory 2 (OCO-2) mission. His code has guided autonomous robot sciencecraft fielded to North America, South America, the Atlantic Ocean, Low Earth Orbit, and the surface of Mars.