Using Collocated Vision and Tactile Sensors for Visual Servoing and Localization

Journal Article, IEEE Robotics and Automation Letters, Vol. 7, No. 2, pp. 3427-3434, April 2022

Abstract

Coordinating proximity and tactile imaging by collocating cameras with tactile sensors can 1) provide useful information before contact, such as object pose estimates, and visually servo a robot to a target with less occlusion and higher resolution than head-mounted or external depth cameras; 2) simplify the contact point and pose estimation problems, helping tactile sensing avoid erroneous matches when a surface lacks significant texture or has repetitive texture with many possible matches; and 3) use tactile imaging to further refine contact point and object pose estimates. We demonstrate our results on objects that have more surface texture than most objects in standard manipulation datasets. We find that optic flow must be integrated over a substantial amount of camera travel to be useful for predicting movement direction. Most importantly, we also find that state-of-the-art vision algorithms localize tactile images on object models poorly unless a reasonable prior is provided by the collocated cameras.
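The abstract's observation about optic flow lends itself to a concrete illustration: flow between a single pair of frames is too noisy to predict motion direction, but flow accumulated over a longer stretch of camera travel becomes informative. The Python sketch below is our illustration under that reading, not the authors' code; it uses OpenCV's Farneback dense flow, and the frame sequence frames and the helper name integrated_flow_direction are assumptions made for the example.

import numpy as np
import cv2

def integrated_flow_direction(frames):
    """Accumulate dense optic flow over a sequence of same-size grayscale
    frames and return a unit vector giving the dominant image-plane
    motion direction (hypothetical helper for illustration)."""
    h, w = frames[0].shape
    accumulated = np.zeros((h, w, 2), dtype=np.float32)
    for prev, curr in zip(frames, frames[1:]):
        # Farneback dense flow between consecutive frames
        # (pyr_scale=0.5, levels=3, winsize=15, iterations=3,
        #  poly_n=5, poly_sigma=1.2, flags=0)
        flow = cv2.calcOpticalFlowFarneback(
            prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        accumulated += flow  # integrate flow over the camera's travel
    mean_flow = accumulated.reshape(-1, 2).mean(axis=0)
    return mean_flow / (np.linalg.norm(mean_flow) + 1e-9)

A single frame pair yields a noisy estimate; summing flow over many frames averages out per-pixel noise, which is consistent with the paper's finding that substantial camera travel is needed before the direction estimate is useful.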

BibTeX

@article{Yuan-2022-131727,
author = {Arkadeep Narayan Chaudhury and Timothy Man and Wenzhen Yuan and Christopher G. Atkeson},
title = {Using Collocated Vision and Tactile Sensors for Visual Servoing and Localization},
journal = {IEEE Robotics and Automation Letters},
year = {2022},
month = {April},
volume = {7},
number = {2},
pages = {3427--3434},
keywords = {Cameras, Robot vision systems, Robots, Tactile sensors, Location awareness, Optical sensors, Optical imaging},
}