Homography-Based Deep Visual Servoing Methods for Planar Grasps

Austin S. Wang, Wuming Zhang, Daniel Troniak, Jacky Liang and Oliver Kroemer
Conference Paper, Proceedings of (ICRA) International Conference on Robotics and Automation, November, 2019

Abstract

We propose a visual servoing framework for learning to improve grasps of objects. RGB and depth images from grasp attempts are collected using an automated data collection process. The data is then used to train a Grasp Quality Network (GQN) that predicts the outcome of grasps from visual information. A grasp optimization pipeline uses homography models with the trained network to optimize the grasp success rate. We evaluate and compare several algorithms that adjust the gripper pose based on the current observation from a gripper-mounted camera to perform visual servoing. Evaluations in both simulated and hardware environments show considerable improvement in grasp robustness with models trained using fewer than 30K grasp trials. Success rates for grasping novel objects unseen during training increased from 18.5% to 81.0% in simulation, and from 17.8% to 78.0% in the real world.
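
As a rough illustration of the homography-based optimization described in the abstract (a sketch, not the authors' code): for planar grasps viewed by a roughly fronto-parallel wrist camera, a candidate in-plane gripper adjustment induces a homography on the image, so candidates can be scored by warping the current observation and passing each warp through the trained grasp quality network. The GQN interface (`gqn`), the pixel-space candidate set, and the image geometry below are illustrative assumptions.

import numpy as np
import cv2
import torch

def planar_homography(dx_px, dy_px, dtheta, center):
    """Homography induced on a fronto-parallel plane by an in-plane
    shift (dx_px, dy_px pixels) and rotation dtheta (radians) about
    the image center."""
    cx, cy = center
    c, s = np.cos(dtheta), np.sin(dtheta)
    # Rotate about (cx, cy), then translate by (dx_px, dy_px).
    return np.array([
        [c, -s, (1 - c) * cx + s * cy + dx_px],
        [s,  c, (1 - c) * cy - s * cx + dy_px],
        [0,  0, 1.0],
    ])

def best_adjustment(rgb, gqn, candidates):
    """Warp the current wrist-camera image for each candidate gripper
    adjustment and keep the one the GQN scores highest. `gqn` is
    assumed to map a 1x3xHxW float tensor to a scalar success score."""
    h, w = rgb.shape[:2]
    best, best_score = None, -np.inf
    for dx, dy, dth in candidates:
        H = planar_homography(dx, dy, dth, center=(w / 2.0, h / 2.0))
        warped = cv2.warpPerspective(rgb, H, (w, h))
        x = torch.from_numpy(warped).permute(2, 0, 1).float()[None] / 255.0
        with torch.no_grad():
            score = gqn(x).item()  # predicted grasp success probability
        if score > best_score:
            best, best_score = (dx, dy, dth), score
    return best, best_score

In this sketch the selected (dx, dy, dtheta) would be converted back to a Cartesian gripper correction and executed, then the loop repeated on the new observation, which is the servoing behavior the paper evaluates.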


@conference{Wang-2019-118447,
author = {Austin S. Wang and Wuming Zhang and Daniel Troniak and Jacky Liang and Oliver Kroemer},
title = {Homography-Based Deep Visual Servoing Methods for Planar Grasps},
booktitle = {Proceedings of (ICRA) International Conference on Robotics and Automation},
year = {2019},
month = {November},
}