Advancing autonomy in underwater manipulation with a deep learning visual dataset collected in cold seep environments of the Costa Rica Active Margin

Gideon Billings, Matthew Johnson-Roberson, and Richard Camilli
Conference Paper, Proceedings of American Geophysical Union Fall Meeting Abstracts, December, 2019

Abstract

A remotely operated underwater vehicle (ROV) outfitted with a manipulator is often the most viable means of collecting samples from underwater environments or making precisely located measurements with a probe. Outfitting ROVs with the capability to autonomously perform common manipulation tasks would greatly simplify operating procedures, reduce personnel requirements, lower the risk of vehicle damage during manipulation, and improve the efficiency of meticulous manipulation tasks. Like ROV pilots, an autonomous manipulation system must rely on feedback from a suite of cameras and sensors to understand and interact with the scene. Reliable computer vision methods are the key to scene understanding and to achieving autonomy in underwater manipulation. The field of computer vision has advanced rapidly in recent years with the onset of deep learning methods. However, while deep learning has greatly advanced the state of the art in terrestrial visual methods, the challenges of collecting the underwater datasets necessary for training and testing these methods have hindered progress in leveraging these advancements for underwater applications. In this work, we contribute an underwater visual dataset of graspable handles with annotated ground truth poses, collected in cold seep environments of the Costa Rica Active Margin. The handle objects in the dataset are representative of common handle types attached to tools for ROV manipulation. The dataset includes images from a fisheye camera mounted on the wrist of an ROV manipulator and synchronized stereo imagery from a camera pair fixed on the ROV frame that views the manipulation workspace. The dataset will also be augmented with tank imagery of the same handle object set, along with synthetic images rendered from textured object models.
This dataset allows for the development and testing of underwater visual methods for object detection and pose estimation in a diverse set of natural seafloor environments of scientific relevance. We also introduce a simple and efficient method for collecting images of underwater scenes containing known objects, using AprilTags dispersed through the scene to ground the camera pose, and we contribute an open-source tool for annotating the 6D object poses and bounding boxes in the collected image sequences.
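To illustrate the idea behind grounding the camera pose with fiducial tags, the sketch below chains homogeneous transforms: a tag detection gives the tag's pose in the camera frame, the tag's surveyed pose in the world frame then fixes the camera pose, and a one-time object annotation in the world frame can be propagated into every image. This is a minimal sketch, not the authors' implementation; all transform values and frame names are hypothetical, and the tag detection itself (e.g. from an AprilTag detector) is assumed to be given.

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical example values (identity rotations for clarity):
# tag pose in the world frame, surveyed once for the scene
T_world_tag = make_T(np.eye(3), [1.0, 0.0, 0.0])
# tag pose in the camera frame, as an AprilTag detector would report per image
T_cam_tag = make_T(np.eye(3), [0.0, 0.0, 2.0])
# object (handle) pose in the world frame, annotated once
T_world_obj = make_T(np.eye(3), [1.5, 0.2, 0.0])

# Camera pose in the world frame, grounded by the tag detection:
T_world_cam = T_world_tag @ np.linalg.inv(T_cam_tag)
# Object pose in the camera frame, obtained without re-annotating each image:
T_cam_obj = np.linalg.inv(T_world_cam) @ T_world_obj
```

Because the world-frame poses of the tags and objects are fixed, only the per-image tag detection changes, which is what makes annotating long image sequences efficient.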

BibTeX

@conference{Billings-2019-130108,
author = {Gideon Billings and Matthew Johnson-Roberson and Richard Camilli},
title = {Advancing autonomy in underwater manipulation with a deep learning visual dataset collected in cold seep environments of the Costa Rica Active Margin},
booktitle = {Proceedings of American Geophysical Union Fall Meeting Abstracts},
year = {2019},
month = {December},
}