Omnivergent Stereo

Heung-Yeung Shum, Adam Kalai, and Steven Seitz
Proc. Seventh International Conference on Computer Vision (ICCV '99), 1999.


Download
  • Adobe portable document format (pdf) (1MB)
Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Abstract
The notion of a virtual sensor for optimal 3D reconstruction is introduced. Instead of planar perspective images that collect many rays at a fixed viewpoint, omnivergent cameras collect a small number of rays at many different viewpoints. The resulting 2D manifold of rays is arranged into two multiple-perspective images for stereo reconstruction. We call such images omnivergent images, and the process of reconstructing the scene from such images omnivergent stereo. This procedure is shown to produce 3D scene models with minimal reconstruction error, due to the fact that for any point in the 3D scene, two rays with maximum vergence angle can be found in the omnivergent images. Furthermore, omnivergent images are shown to have horizontal epipolar lines, enabling the application of traditional stereo matching algorithms without modification. Three types of omnivergent virtual sensors are presented: spherical omnivergent cameras, center-strip cameras, and dual-strip cameras.
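The vergence advantage claimed in the abstract can be sketched with elementary geometry. This is a hedged illustration, not the paper's derivation: assume viewpoints lie on a circle of radius R and consider a scene point at distance d > R from the center. The two rays through that point tangent to the viewing circle subtend 2*arcsin(R/d) (tangent lines from an external point), whereas a conventional stereo pair with the same baseline 2R verges only 2*arctan(R/d) for a point on the baseline's perpendicular bisector:

```python
import math

def omnivergent_vergence(R, d):
    """Angle between the two rays through a point at distance d from the
    center of a viewing circle of radius R that are tangent to that circle.
    By the tangent-line construction this is 2*arcsin(R/d)."""
    return 2.0 * math.asin(R / d)

def fixed_baseline_vergence(R, d):
    """Vergence at a point distance d along the perpendicular bisector of a
    conventional stereo pair with baseline 2R: 2*arctan(R/d)."""
    return 2.0 * math.atan(R / d)

R, d = 1.0, 5.0  # illustrative values, not from the paper
print(math.degrees(omnivergent_vergence(R, d)))     # ~23.07 degrees
print(math.degrees(fixed_baseline_vergence(R, d)))  # ~22.62 degrees
```

Since arcsin(x) > arctan(x) for 0 < x < 1, the tangent-ray pair always verges more than the fixed-baseline pair of equal extent, consistent with the abstract's claim that omnivergent images contain maximum-vergence ray pairs for every scene point.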

Text Reference
Heung-Yeung Shum, Adam Kalai, and Steven Seitz, "Omnivergent Stereo," Proc. Seventh International Conference on Computer Vision (ICCV '99), 1999.

BibTeX Reference
@inproceedings{Shum_1999_2841,
   author = "Heung-Yeung Shum and Adam Kalai and Steven Seitz",
   title = "Omnivergent Stereo",
   booktitle = "Proc. Seventh International Conference on Computer Vision (ICCV '99)",
   year = "1999",
}