Self-Supervised Learning on Mobile Robots Using Acoustics, Vibration, and Visual Models to Build Rich Semantic Terrain Maps

Field Robotics Center Seminar


Jacqueline Libby
PhD Student, Robotics Institute, Carnegie Mellon University
Wednesday, September 18
1:00 pm - 2:00 pm
3305 Newell-Simon Hall

Humans and robots would benefit from rich semantic maps of the terrain in which they operate.  Mobile robots equipped with sensors and perception software could build such maps as they navigate a new environment.  This information could then be used by humans or robots for better localization and path planning, as well as a variety of other tasks.  However, building good semantic maps typically requires a great deal of human effort and robot time.  Prior work has addressed this problem, but existing approaches do not provide a high level of semantic richness, and in some cases they require extensive human data labeling and robot driving time.

We address this problem with a combination of better sensors and features, both proprioceptive and exteroceptive, and self-supervised learning.  We enhance proprioception by exploring new sensing modalities such as sound and vibration, which increases the number and variety of terrain types that can be estimated.  We build a supervised proprioceptive multiclass model that can predict up to seven terrain classes.  The proprioceptive predictions are then used as labels to train a self-supervised exteroceptive model from camera data, using modern vision learning techniques.  This exteroceptive model can then estimate those same terrain types more reliably in new environments, and its semantic terrain predictions can be spatially registered into a larger map of the surrounding environment.  3D point clouds from a rolling/tilting ladar are used to register the proprioceptive and exteroceptive data, as well as to register the resulting exteroceptive predictions into the larger map.  Our claim is that self-supervised learning makes the exteroception more reliable, since the exteroceptive model can be automatically retrained for new locations without human supervision.  We conducted experiments to support this claim by collecting data sets from different geographical environments and comparing classification accuracies.  Our results show that our self-supervised learning approach outperforms supervised visual learning techniques.
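The label-transfer loop described in the abstract can be sketched as follows.  This is an illustrative toy, not the thesis's implementation: the nearest-centroid classifiers, the synthetic three-class features, and the split sizes are all assumptions standing in for the real proprioceptive/exteroceptive models and registered sensor data.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_centroids(X, y):
    # One centroid per class: a minimal stand-in for a real multiclass model.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    # Assign each sample to its nearest class centroid.
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

# Synthetic stand-ins for three terrain classes (e.g., grass, gravel, asphalt).
# X_prop mimics vibration/acoustic statistics; X_vis mimics co-registered
# visual features from the camera, aligned sample-by-sample via the map.
n = 200
y_true = rng.integers(0, 3, size=n)
X_prop = rng.normal(loc=y_true[:, None] * 3.0, scale=0.5, size=(n, 4))
X_vis  = rng.normal(loc=y_true[:, None] * 3.0, scale=0.5, size=(n, 6))

# Step 1: supervised proprioceptive model from a small hand-labeled set.
prop_model = fit_centroids(X_prop[:50], y_true[:50])

# Step 2: proprioceptive predictions become pseudo-labels for the rest.
pseudo = predict(prop_model, X_prop[50:])

# Step 3: train the exteroceptive (visual) model on pseudo-labels alone --
# no human labeling of the camera data is needed.
vis_model = fit_centroids(X_vis[50:], pseudo)

# Step 4: the visual model now classifies terrain from camera features only.
acc = (predict(vis_model, X_vis) == y_true).mean()
print(f"visual-model accuracy vs. ground truth: {acc:.2f}")
```

The key design point the sketch shows is that the visual model never sees a human-provided label: retraining it in a new environment only requires driving over the terrain so that proprioception can generate fresh pseudo-labels.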

Speaker Bio:
Jacqueline Libby is a PhD candidate in the Robotics Institute.  Her research interests are focused on developing real-world robotic systems.  During her time at the Robotics Institute, she has worked with leading field roboticists to explore how complex robotic systems can operate in complex outdoor environments.  She hopes in the future to apply what she has learned to environmental sustainability or medicine.  Her thesis work (described in the abstract above) focuses on fusing a variety of sensing modalities to enhance robot perception.