Adapting Preshaped Grasping Movements using Vision Descriptors

Oliver Kroemer, Renaud Detry, Justus Piater and Jan Peters
Conference Paper, International Conference on the Simulation of Adaptive Behavior (SAB), January, 2010


Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.


Grasping is one of the most important abilities needed for future service robots. In the task of picking up an object from within clutter, traditional robotics approaches would determine a suitable grasping point and then use a movement planner to reach the goal. The planner would require precise and accurate information about the environment and long computation times, both of which are often not available. Therefore, methods are needed that execute grasps robustly even with imprecise information gathered only from standard stereo vision. We propose techniques that reactively modify the robot's learned motor primitives based on non-parametric potential fields centered on the Early Cognitive Vision descriptors. These allow both obstacle avoidance and the adaptation of finger motions to the object's local geometry. The methods were tested on a real robot, where they led to improved adaptability and quality of grasping actions.
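To illustrate the core idea of a non-parametric potential field centered on vision descriptors, the following is a minimal sketch (not the authors' implementation): each descriptor's 3-D position contributes a Gaussian repulsion kernel, and the summed gradient is added as a reactive correction to the nominal velocity of a motor primitive. All function names, the Gaussian kernel choice, and the parameter values are illustrative assumptions.

```python
import numpy as np

def repulsive_field_gradient(x, descriptor_positions, strength=1.0, bandwidth=0.05):
    """Gradient of a non-parametric repulsive potential at point x.

    Each row of `descriptor_positions` is the 3-D location of a vision
    descriptor (e.g. an edge element from stereo vision); each one
    contributes a Gaussian repulsion kernel of the given bandwidth.
    The kernel and parameters are illustrative, not from the paper.
    """
    diff = x - descriptor_positions                      # (N, 3) offsets
    d2 = np.sum(diff**2, axis=1, keepdims=True)          # squared distances
    weights = strength * np.exp(-d2 / (2.0 * bandwidth**2))
    # Negative gradient of the summed Gaussians: pushes x away from
    # nearby descriptors, with influence decaying smoothly with distance.
    return np.sum(weights * diff, axis=0) / bandwidth**2

def adapt_velocity(x, v_nominal, descriptor_positions, gain=0.1):
    """Reactively correct a motor primitive's velocity command.

    Adds a repulsive term so the hand deviates around clutter while
    otherwise following the learned movement.
    """
    return v_nominal + gain * repulsive_field_gradient(x, descriptor_positions)
```

Because the field is a sum over raw descriptor positions, no parametric obstacle model is needed: adding or removing descriptors directly reshapes the potential, which matches the reactive, model-free character described in the abstract.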

@inproceedings{kroemer2010adapting,
  author    = {Oliver Kroemer and Renaud Detry and Justus Piater and Jan Peters},
  title     = {Adapting Preshaped Grasping Movements using Vision Descriptors},
  booktitle = {International Conference on the Simulation of Adaptive Behavior (SAB)},
  year      = {2010},
  month     = {January},
}