Wide aperture imaging sonar reconstruction using generative models - Robotics Institute Carnegie Mellon University


E. Westman and M. Kaess
Conference Paper, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 8067-8074, November 2019

Abstract

In this paper we propose a new framework for reconstructing underwater surfaces from wide aperture imaging sonar sequences. We demonstrate that when the leading object edge in each sonar image can be accurately triangulated in 3D, the remaining surface may be "filled in" using a generative sensor model. This process generates a full three-dimensional point cloud for each image in the sequence. We propose integrating these surface measurements into a cohesive global map using a truncated signed distance field (TSDF) to fuse the point clouds generated by each image. This allows for reconstructing surfaces with significantly fewer sonar images and viewpoints than previous methods. The proposed method is evaluated by reconstructing a mock-up piling structure in a test tank environment and a real-world underwater piling in the field. Our surface reconstructions are quantitatively compared to ground-truth models and are shown to be more accurate than those of previous state-of-the-art algorithms.
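The TSDF fusion step described above can be sketched as follows. This is a minimal illustrative implementation, not the paper's: the grid size, voxel size, truncation distance, and the per-point projective update are all assumptions chosen for clarity. Each measured 3D point updates the voxels near its sensor ray with a truncated signed distance, accumulated as a running weighted average across images.

```python
import numpy as np

GRID = 32     # voxels per axis (hypothetical)
VOXEL = 0.05  # voxel edge length in meters (hypothetical)
TRUNC = 0.10  # truncation distance in meters (hypothetical)

tsdf = np.ones((GRID, GRID, GRID), dtype=np.float32)  # truncated signed distances
weights = np.zeros_like(tsdf)                         # per-voxel fusion weights

def voxel_centers():
    """World coordinates of all voxel centers, shape (G, G, G, 3)."""
    idx = (np.arange(GRID) + 0.5) * VOXEL
    return np.stack(np.meshgrid(idx, idx, idx, indexing="ij"), axis=-1)

def integrate(points, sensor_origin, w=1.0):
    """Fuse one point cloud (N, 3) from a single sonar image into the TSDF."""
    centers = voxel_centers()
    for p in points:
        ray = p - sensor_origin
        depth = np.linalg.norm(ray)
        ray /= depth
        # Depth of every voxel center along this ray; sdf > 0 in front of the surface.
        t = (centers - sensor_origin) @ ray
        sdf = depth - t
        # Restrict the update to voxels close to the ray and inside the truncation band.
        perp = np.linalg.norm((centers - sensor_origin) - t[..., None] * ray, axis=-1)
        mask = (np.abs(sdf) <= TRUNC) & (perp < VOXEL)
        d = np.clip(sdf / TRUNC, -1.0, 1.0)
        # Running weighted average of the truncated signed distance.
        tsdf[mask] = (weights[mask] * tsdf[mask] + w * d[mask]) / (weights[mask] + w)
        weights[mask] += w
```

After all images are integrated, the surface is the TSDF's zero level set, which can be extracted with a marching-cubes pass; fusing many per-image point clouds this way averages out per-view noise.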

BibTeX

@conference{Westman-2019-120939,
author = {E. Westman and M. Kaess},
title = {Wide aperture imaging sonar reconstruction using generative models},
booktitle = {Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
year = {2019},
month = {November},
pages = {8067--8074},
}