A robot with a hand-mounted depth sensor scans a scene. When the robot's joint angles are not known with certainty, how can it best reconstruct the scene? In this work, we simultaneously estimate the joint angles of the robot and reconstruct a dense volumetric model of the scene. In this way, we perform simultaneous localization and mapping in the configuration space of the robot, rather than in the pose space of the camera. We show using simulations and robot experiments that our approach greatly reduces both 3D reconstruction error and joint angle error over simply using the forward kinematics. Unlike other approaches, ours directly reasons about robot joint angles, and can use these to constrain the pose of the sensor. Because of this, it is more robust to missing or ambiguous depth data than approaches that are unconstrained by the robot's kinematics.
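The "simply using the forward kinematics" baseline mentioned in the abstract can be sketched as follows for a planar arm: the sensor pose is obtained by chaining each joint's rotation with its link's translation. This is an illustrative sketch only (the function name and the planar 2-link setup are assumptions, not the paper's implementation, which operates on a full 3D kinematic chain).

```python
import math

def forward_kinematics(joint_angles, link_lengths):
    """Planar forward kinematics: chain each revolute joint's rotation
    with its link's translation to get the sensor (end-effector) pose.
    Returns (x, y, heading). A 2D sketch of the baseline; the paper's
    method works on the robot's full 3D kinematic chain."""
    x = y = theta = 0.0
    for q, length in zip(joint_angles, link_lengths):
        theta += q                       # accumulate joint rotation
        x += length * math.cos(theta)    # translate along the rotated link
        y += length * math.sin(theta)
    return x, y, theta
```

With uncertain joint angles, the pose this returns is wrong by the accumulated joint error, which is why the paper refines the joint angles themselves against the depth data rather than trusting the kinematic chain alone.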
SLAM, Mapping, Kinematics, Manipulation, Computer Vision
Sponsors: Toyota; ONR; NSF
Associated Center(s) / Consortia: Quality of Life Technology Center, National Robotics Engineering Center, Field Robotics Center, and Center for the Foundations of Robotics
Associated Lab(s) / Group(s): Personal Robotics
Note: This paper was part of the ICRA RA-L track, meaning it was presented at the ICRA conference and also published in the IEEE Robotics and Automation Letters journal.
Matthew Klingensmith, Siddhartha Srinivasa, and Michael Kaess, "Articulated Robot Motion for Simultaneous Localization and Mapping (ARM-SLAM)," IEEE Robotics and Automation Letters, January 2016.
@article{klingensmith2016armslam,
  author  = "Matthew Klingensmith and Siddhartha Srinivasa and Michael Kaess",
  editor  = "Antonio Bicchi",
  title   = "Articulated Robot Motion for Simultaneous Localization and Mapping ({ARM-SLAM})",
  journal = "IEEE Robotics and Automation Letters",
  month   = "January",
  year    = "2016",
  note    = "Part of the ICRA RA-L track: presented at the ICRA conference and also published in the IEEE Robotics and Automation Letters journal."
}
The Robotics Institute is part of the School of Computer Science, Carnegie Mellon University.