
Toward fieldable human-scale mobile manipulation using RoMan

Chad C. Kessens, Jonathan Fink, Arnon Hurwitz, Matthew Kaplan, Philip R. Osteen, Trevor Rocks, John Rogers, Ethan Stump, Long Quang, Michael DiBlasi, Mark Gonzalez, Dilip Patel, Jaymit Patel, Shiyani Patel, Matthew Weiker, Joseph Bowkett, Renaud Detry, Sisir Karumanchi, Joel Burdick, Larry Matthies, Yash Oza, Aditya Agarwal, Andrew Dornbush, Maxim Likhachev, Karl Schmeckpeper, Kostas Daniilidis, Ajinkya Kamat, Sanjiban Choudhury, Aditya Mandalika, and Siddhartha Srinivasa
Conference Paper, Proceedings of SPIE Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, Vol. 11413, April 2020

Abstract

Robots are ideal surrogates for performing tasks that are dull, dirty, and dangerous. To fully achieve this ideal, a robotic teammate should be able to autonomously perform human-level tasks in unstructured environments where we do not want humans to go. In this paper, we take a step toward realizing that vision by introducing the integration of state-of-the-art advancements in intelligence, perception, and manipulation on the RoMan (Robotic Manipulation) platform. RoMan comprises two 7-degree-of-freedom (DoF) limbs connected to a 1-DoF torso and mounted on a tracked base. Multiple lidars are used for navigation, and a stereo depth camera provides point clouds for grasping. Each limb has a 6-DoF force-torque sensor at the wrist, with a dexterous 3-finger gripper on one limb and a stronger 4-finger claw-like hand on the other. Tasks begin with an operator specifying a mission type, a desired final destination for the robot, and a general region where the robot should look for grasps. All other portions of the task are completed autonomously. This includes navigation, object identification and pose estimation (if the object is known) via deep learning or perception through search, fine maneuvering, grasp planning via a grasp library, arm motion planning, and manipulation planning (e.g., dragging if the object is deemed too heavy to freely lift). Finally, we present initial test results on two notional tasks: clearing a road of debris, such as a heavy tree or a pile of unknown light debris, and opening a hinged container to retrieve a bag inside it.
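To make the autonomy pipeline described in the abstract concrete, the Python sketch below traces the same operator-specified task flow, from navigation through perception, grasping, and the lift-versus-drag decision. It is illustrative only: every name here (MissionSpec, navigate_to, plan_grasp, estimated_load, and so on) is a hypothetical stand-in rather than the RoMan software's actual API, and the load-threshold check merely stands in for the paper's "dragging if the object is deemed too heavy to freely lift" behavior.

from dataclasses import dataclass

# Hypothetical stand-ins for the subsystems named in the abstract;
# none of these identifiers come from the paper itself.

@dataclass
class MissionSpec:
    mission_type: str   # e.g., "clear_debris" or "open_container"
    destination: tuple  # operator-specified final (x, y) goal for the robot
    grasp_region: tuple # general region where the robot should look for grasps

def execute_mission(robot, spec: MissionSpec) -> None:
    """Sketch of the autonomous task flow described in the abstract."""
    # 1. Navigate to the work area (lidar-based navigation).
    robot.navigate_to(spec.grasp_region)

    # 2. Perceive: if the object is known, identify it and estimate its
    #    pose via deep learning; otherwise fall back to perception
    #    through search over the stereo point cloud.
    obj = robot.identify_object(spec.grasp_region)
    if obj.is_known:
        pose = robot.estimate_pose(obj)
    else:
        pose = robot.perceive_through_search(spec.grasp_region)

    # 3. Fine maneuvering to bring the object within the arms' workspace.
    robot.fine_maneuver(pose)

    # 4. Grasp planning via a grasp library, then arm motion planning.
    grasp = robot.grasp_library.plan_grasp(obj, pose)
    trajectory = robot.plan_arm_motion(grasp)
    robot.execute(trajectory)

    # 5. Manipulation planning: drag rather than lift if the object is
    #    deemed too heavy (e.g., estimated from the wrist force-torque
    #    sensors described in the hardware overview).
    if robot.estimated_load() > robot.max_lift_load:
        robot.drag_to(spec.destination)
    else:
        robot.lift_and_carry_to(spec.destination)

In this sketch the operator supplies only the MissionSpec, mirroring the paper's interaction model in which the mission type, final destination, and grasp-search region are specified up front and everything downstream runs autonomously.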

BibTeX

@conference{Kessens-2020-126421,
author = {Chad C. Kessens and Jonathan Fink and Arnon Hurwitz and Matthew Kaplan and Philip R. Osteen and Trevor Rocks and John Rogers and Ethan Stump and Long Quang and Michael DiBlasi and Mark Gonzalez and Dilip Patel and Jaymit Patel and Shiyani Patel and Matthew Weiker and Joseph Bowkett and Renaud Detry and Sisir Karumanchi and Joel Burdick and Larry Matthies and Yash Oza and Aditya Agarwal and Andrew Dornbush and Maxim Likhachev and Karl Schmeckpeper and Kostas Daniilidis and Ajinkya Kamat and Sanjiban Choudhury and Aditya Mandalika and Siddhartha Srinivasa},
title = {Toward fieldable human-scale mobile manipulation using RoMan},
booktitle = {Proceedings of SPIE Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II},
year = {2020},
month = {April},
volume = {11413},
}