ShapeMap 3-D: Efficient shape mapping through dense touch and vision

Conference Paper, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 7073–7080, May 2022

Abstract

Knowledge of 3-D object shape is of great importance to robot manipulation tasks, but it may not be readily available in unstructured environments. While vision is often occluded during robot-object interaction, high-resolution tactile sensors provide a dense local view of the object. However, tactile sensors have a limited sensing area, so the shape representation must faithfully approximate regions that have not been touched. A further key challenge is efficiently incorporating these dense tactile measurements into a 3-D mapping framework. In this work, we propose an incremental shape mapping method that uses a GelSight tactile sensor and a depth camera. Local shape is recovered from tactile images via a learned model trained in simulation. Through efficient inference on a spatial factor graph informed by a Gaussian process, we build an implicit surface representation of the object. We demonstrate visuo-tactile mapping in both simulated and real-world experiments, incrementally building 3-D reconstructions of household objects.
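The mapping described above hinges on a Gaussian-process implicit surface: oriented contact points (from touch and depth) constrain a signed-distance field whose zero level set is the object surface. The sketch below is a minimal, self-contained illustration of that idea only, not the paper's factor-graph implementation; the squared-exponential kernel, the offset-point construction, and all names are assumptions made for illustration.

import numpy as np

# Minimal Gaussian-process implicit surface sketch (illustrative; the paper
# performs efficient incremental inference on a spatial factor graph rather
# than a batch GP solve). Surface samples get SDF value 0; samples offset
# along the measured normals anchor the sign of the field on either side.

def kernel(A, B, length=0.5):
    # Squared-exponential kernel between point sets A (N,3) and B (M,3).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

def fit_gp_sdf(points, normals, eps=0.05, noise=1e-3):
    # Regress a signed-distance field from oriented surface samples.
    n = len(points)
    X = np.vstack([points, points + eps * normals, points - eps * normals])
    y = np.concatenate([np.zeros(n), eps * np.ones(n), -eps * np.ones(n)])
    K = kernel(X, X) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, y)
    return lambda Q: kernel(Q, X) @ alpha  # posterior mean SDF at queries Q

# Toy usage: samples of a unit sphere. Near the data, the learned field is
# negative inside and positive outside; its zero level set approximates the
# surface (e.g., extractable with marching cubes).
rng = np.random.default_rng(0)
p = rng.normal(size=(200, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)  # points on the unit sphere
sdf = fit_gp_sdf(p, p)                         # sphere normals equal p
print(sdf(np.array([[0.0, 0.0, 0.5], [0.0, 0.0, 1.5]])))  # < 0 then > 0

The offset points are what let the zero level set represent non-contact regions faithfully: between touches, the GP prior smoothly interpolates the field rather than leaving holes in the surface.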

BibTeX

@conference{Suresh-2022-134123,
author = {Sudharshan Suresh and Zilin Si and Joshua Mangelson and Wenzhen Yuan and Michael Kaess},
title = {ShapeMap 3-D: Efficient shape mapping through dense touch and vision},
booktitle = {Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)},
year = {2022},
month = {May},
pages = {7073--7080},
}