Self-Supervised Learning on Mobile Robots Using Acoustics, Vibration, and Visual Models to Build Rich Semantic Terrain Maps

PhD Thesis Defense

Jacqueline Libby
PhD Student, Robotics Institute, Carnegie Mellon University

Monday, November 18
1:00 pm - 2:00 pm
NSH 3305

Abstract:
Humans and robots would benefit from rich semantic maps of the terrain in which they operate. Mobile robots equipped with sensors and perception software could build such maps as they navigate through a new environment. Humans or robots could then use this information for better localization and path planning, as well as a variety of other tasks. However, building good semantic maps without a great deal of human effort and robot time is difficult. Prior work has addressed this problem, but it does not provide a high level of semantic richness, and in some cases it requires extensive human data labeling and robot driving time.

We address this problem with a combination of better sensors and features, both proprioceptive and exteroceptive, and self-supervised learning. We enhance proprioception by exploring new sensing modalities such as sound and vibration, which in turn increases the number and variety of terrain types that can be estimated. We build a supervised proprioceptive multiclass model that can predict up to seven terrain classes. The proprioceptive predictions are then used as labels to train a self-supervised exteroceptive model from camera data, using modern visual learning techniques. This exteroceptive model can then estimate those same terrain types more reliably in new environments, and its semantic terrain predictions can be spatially registered into a larger map of the surrounding environment. 3D point clouds from a rolling/tilting ladar are used to register the proprioceptive and exteroceptive data, as well as to register the resulting exteroceptive predictions into the larger map. Our claim is that self-supervised learning makes the exteroception more reliable, since it can be automatically retrained for new locations without human supervision. We conducted experiments to support this claim by collecting data sets from different geographical environments and comparing classification accuracies. Our results show that our self-supervised learning approach outperforms supervised visual learning techniques.
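The cross-modal self-supervision described above can be sketched roughly as follows. This is a minimal illustration, not the thesis's implementation: the feature extraction, terrain classes beyond their count, spatial registration via ladar, and the specific classifiers (here scikit-learn's SVC and RandomForestClassifier) are all assumptions, and random arrays stand in for real sensor features.

```python
# Hypothetical sketch of the pipeline: a proprioceptive classifier trained on
# labeled sound/vibration features produces terrain labels that supervise an
# exteroceptive (visual) model. Random arrays stand in for real features;
# only the class count (7) comes from the abstract.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
N_CLASSES = 7  # terrain types, per the abstract

# 1) Supervised proprioceptive model: acoustic/vibration features -> terrain class.
prop_X = rng.normal(size=(500, 20))           # stand-in for sound/vibration features
prop_y = rng.integers(0, N_CLASSES, 500)      # human-labeled terrain classes
prop_model = SVC().fit(prop_X, prop_y)

# 2) Self-supervision: in a new environment, the robot records both modalities
#    while driving. Proprioceptive predictions become pseudo-labels for the
#    co-registered camera features (registration via ladar point clouds is
#    omitted in this sketch).
new_prop_X = rng.normal(size=(300, 20))
pseudo_labels = prop_model.predict(new_prop_X)
vis_X = rng.normal(size=(300, 64))            # stand-in for visual features

# 3) Train the exteroceptive model on the pseudo-labels -- no human labeling.
vis_model = RandomForestClassifier(random_state=0).fit(vis_X, pseudo_labels)

# The visual model can now predict terrain for regions the robot never drove over.
terrain_map = vis_model.predict(rng.normal(size=(10, 64)))
print(terrain_map.shape)  # (10,)
```

The key property this illustrates is that step 3 requires no human annotation: in each new environment, the robot can regenerate pseudo-labels by driving, and retrain the visual model automatically.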


Thesis Committee Members:
Anthony Stentz, Chair
Martial Hebert
David Wettergreen
Larry H. Matthies, Jet Propulsion Laboratory