SilhoNet-Fisheye: Adaptation of A ROI-Based Object Pose Estimation Network to Monocular Fisheye Images

Gideon Billings and M. Johnson-Roberson
Journal Article, IEEE Robotics and Automation Letters, Vol. 5, No. 3, pp. 4241-4248, July 2020

Abstract

There has been much recent interest in deep learning methods for monocular image-based object pose estimation. While object pose estimation is an important problem for autonomous robot interaction with the physical world, and the application space for monocular-based methods is expansive, there has been little work on applying these methods to fisheye imaging systems. Also, little exists in the way of annotated fisheye image datasets on which these methods can be developed and tested. The research landscape is even more sparse for object detection methods applied in the underwater domain, fisheye image-based or otherwise. In this work, we present a novel framework for adapting an ROI-based 6D object pose estimation method to work on full fisheye images. The method incorporates the gnomonic projection of regions of interest from an intermediate spherical image representation to correct for the fisheye distortions. Further, we contribute a fisheye image dataset, called UWHandles, collected in natural underwater environments, with 6D object pose and 2D bounding box annotations.
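
For illustration, the sketch below shows one way an ROI patch could be extracted via gnomonic projection from an intermediate spherical image, assuming an equirectangular parametrization of the sphere. The function name, the nearest-neighbour sampling, and the parameter choices are illustrative assumptions, not the paper's published implementation.

import numpy as np

def gnomonic_roi_patch(equirect_img, center_lon, center_lat, fov_deg, out_size):
    """Sample a perspective (gnomonic) patch from an equirectangular spherical image.

    equirect_img: H x W x C spherical image; longitude spans [-pi, pi), latitude [-pi/2, pi/2].
    center_lon, center_lat: tangent point of the projection in radians (ROI center direction).
    fov_deg: field of view of the output patch in degrees.
    out_size: side length of the square output patch in pixels.
    """
    H, W = equirect_img.shape[:2]

    # Tangent-plane grid at unit distance from the sphere center; half-extent is tan(fov/2).
    half = np.tan(np.radians(fov_deg) / 2.0)
    xs = np.linspace(-half, half, out_size)
    ys = np.linspace(-half, half, out_size)
    x, y = np.meshgrid(xs, ys)

    # Inverse gnomonic projection: tangent-plane (x, y) -> spherical (lat, lon).
    rho = np.sqrt(x**2 + y**2)
    c = np.arctan(rho)
    sin_c, cos_c = np.sin(c), np.cos(c)
    rho_safe = np.where(rho == 0, 1e-12, rho)  # avoid division by zero at the patch center

    sin_lat = cos_c * np.sin(center_lat) + y * sin_c * np.cos(center_lat) / rho_safe
    lat = np.arcsin(np.clip(sin_lat, -1.0, 1.0))
    lon = center_lon + np.arctan2(
        x * sin_c,
        rho_safe * np.cos(center_lat) * cos_c - y * np.sin(center_lat) * sin_c,
    )

    # Map spherical coordinates to equirectangular pixel indices (nearest-neighbour lookup).
    u = ((lon + np.pi) % (2 * np.pi)) / (2 * np.pi) * (W - 1)
    v = (np.pi / 2 - lat) / np.pi * (H - 1)
    return equirect_img[np.round(v).astype(int), np.round(u).astype(int)]

# Example usage (hypothetical values): extract a 224 x 224 patch with a 60 degree field of
# view centered on an ROI direction (lon0, lat0) in the spherical image.
#   patch = gnomonic_roi_patch(sphere_img, lon0, lat0, fov_deg=60, out_size=224)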

BibTeX

@article{Billings-2020-130124,
author = {Gideon Billings and M. Johnson-Roberson},
title = {SilhoNet-Fisheye: Adaptation of A ROI-Based Object Pose Estimation Network to Monocular Fisheye Images},
journal = {IEEE Robotics and Automation Letters},
year = {2020},
month = {July},
volume = {5},
number = {3},
pages = {4241--4248},
}