
Visual SLAM with a Multi-Camera Rig

Michael Kaess and Frank Dellaert, Technical Report GIT-GVU-06-06, College of Computing, Georgia Institute of Technology, February 2006


Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author’s copyright. These works may not be reposted without the explicit permission of the copyright holder.


Camera-based simultaneous localization and mapping, or visual SLAM, has received much attention recently. Typically, a single camera, multiple cameras in a stereo setup, or omni-directional cameras are used. We propose a different approach, in which multiple cameras can be mounted on a robot in an arbitrary configuration. Allowing the cameras to face in different directions yields better constraints than single cameras or stereo setups can provide, simplifying the reconstruction of large-scale environments. In contrast to omni-directional sensors, the available resolution can be focused on areas of interest depending on the application. We describe a sparse SLAM approach that is suitable for real-time reconstruction from such multi-camera configurations. We have implemented the system and show experimental results in a large-scale environment, using a custom-made eight-camera rig.
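The core geometric idea can be illustrated with a small sketch (not the paper's implementation; all camera parameters, angles, and function names below are assumptions for illustration): a landmark is projected through a hypothetical four-camera rig whose cameras face different directions, and every camera with the point in front of it contributes a reprojection constraint on the single shared robot pose.

```python
# Illustrative sketch (assumed setup, not the authors' system): one landmark
# seen by several differently-facing rig cameras constrains the shared robot
# pose more strongly than a single camera or stereo pair could.
import numpy as np

def rot_z(theta):
    """Rotation about the body z-axis (yaw), in radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Body axes: x forward, y left, z up.  Camera axes: x right, y down, z forward.
R_BODY_TO_CAM = np.array([[0.0, -1.0, 0.0],
                          [0.0,  0.0, -1.0],
                          [1.0,  0.0,  0.0]])

def rig_extrinsics(yaws_deg):
    """Body-to-camera rotations for cameras yawed by the given angles."""
    return [R_BODY_TO_CAM @ rot_z(np.deg2rad(-a)) for a in yaws_deg]

def observations(K, rig, p_body):
    """(camera index, pixel) for each camera with the point in front of it."""
    obs = []
    for i, R in enumerate(rig):
        p_cam = R @ p_body
        if p_cam[2] > 0.0:          # positive depth: in front of this camera
            uvw = K @ p_cam
            obs.append((i, uvw[:2] / uvw[2]))
    return obs

# Assumed shared intrinsics and a rig facing 0/90/180/270 degrees.
K = np.array([[300.0, 0.0, 320.0], [0.0, 300.0, 240.0], [0.0, 0.0, 1.0]])
rig = rig_extrinsics([0.0, 90.0, 180.0, 270.0])

# A landmark ahead and to the left: the forward- and left-facing cameras
# both observe it, so two cameras jointly constrain the robot pose.
obs = observations(K, rig, np.array([2.0, 2.0, 0.0]))
for i, px in obs:
    print(f"camera {i}: pixel {px}")
```

In a SLAM back end, each (camera, pixel) pair would become a residual between the predicted and measured projection; cameras facing different directions constrain different components of the pose, which is the advantage the abstract describes.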

BibTeX Reference
@techreport{kaess2006visual,
  author      = {Michael Kaess and Frank Dellaert},
  title       = {Visual {SLAM} with a Multi-Camera Rig},
  institution = {College of Computing, Georgia Institute of Technology},
  number      = {GIT-GVU-06-06},
  year        = {2006},
  month       = {February},
}