
Multi-Modal Transfer Learning for Grasping Transparent and Specular Objects

Thomas Weng, Amith Pallankize, Yimin Tang, Oliver Kroemer, and David Held
Journal Article, IEEE Robotics and Automation Letters, Vol. 5, No. 3, pp. 3791–3798, July 2020

Abstract

State-of-the-art object grasping methods rely on depth sensing to plan robust grasps, but commercially available depth sensors fail to detect transparent and specular objects. To improve grasping performance on such objects, we introduce a method for learning a multi-modal perception model by bootstrapping from an existing uni-modal model. This transfer learning approach requires only a pre-existing uni-modal grasping model and paired multi-modal image data for training, forgoing the need for ground-truth grasp success labels or real grasp attempts. Our experiments demonstrate that our approach reliably grasps transparent and reflective objects. Video and supplementary material are available at https://sites.google.com/view/transparent-specular-grasping.
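
The bootstrapping idea described in the abstract can be read as a teacher-student transfer: the pre-existing depth-only grasping model produces grasp-quality predictions on paired images, and a multi-modal network is trained to match those predictions, so no ground-truth grasp labels or physical grasp attempts are needed. Below is a minimal PyTorch-style sketch of that scheme; the network architecture, class names, and data loader are illustrative assumptions, not the paper's implementation.

# Hedged sketch: distill a frozen depth-only grasp model into a multi-modal
# (RGB + depth) model using paired images as the only supervision signal.
# All names and shapes here are hypothetical placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

class GraspQualityNet(nn.Module):
    """Toy fully-convolutional grasp-quality predictor (placeholder architecture)."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),  # per-pixel grasp-quality logit
        )

    def forward(self, x):
        return self.net(x)

def train_multimodal(teacher: GraspQualityNet,
                     student: GraspQualityNet,
                     paired_loader: DataLoader,
                     epochs: int = 10,
                     lr: float = 1e-4):
    """Train an RGB-D student to match a frozen depth-only teacher on paired data."""
    teacher.eval()  # the uni-modal model stays fixed
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for rgb, depth in paired_loader:          # paired multi-modal images
            with torch.no_grad():
                target = teacher(depth)           # pseudo-labels from the depth model
            pred = student(torch.cat([rgb, depth], dim=1))
            loss = loss_fn(pred, target)
            opt.zero_grad()
            loss.backward()
            opt.step()

# Usage (shapes are illustrative):
# teacher = GraspQualityNet(in_channels=1)   # pre-trained on depth images
# student = GraspQualityNet(in_channels=4)   # RGB + depth input
# train_multimodal(teacher, student, paired_loader)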

BibTeX

@article{Weng-2020-123091,
author = {Thomas Weng and Amith Pallankize and Yimin Tang and Oliver Kroemer and David Held},
title = {Multi-Modal Transfer Learning for Grasping Transparent and Specular Objects},
journal = {IEEE Robotics and Automation Letters},
year = {2020},
month = {July},
volume = {5},
number = {3},
pages = {3791--3798},
}