3D Grid Maps for Mobile Robot Perception

This Project is no longer active.

Vector lists offer a compact representation of simple diagrams, and all early computer displays drew point-to-point vectors. The size of vector lists grows without bound as image complexity increases, however. Today, all computer displays are raster based. A raster can represent any image at a fixed, albeit high, cost. Rasters became compelling when computer memories grew large enough to hold them and processors fast enough to fill them rapidly.

Most sonar-based research robots of the 1980s built 2D vector maps of their world. Walls, doors and major obstacles were compactly represented, but maps became unwieldy and unreliable in cluttered regions. Today most mapping robots use 2D grids, which can represent arbitrary layouts in shades of occupancy at fixed cost.

Recent gains in computer speed and memory enable robot mapping in 3D. Surface-based descriptions dominate and are efficient for simple scenes, but they are strained by clutter. Since 1992 we’ve been developing a 3D grid approach that loves clutter. Our latest maps, with 16 mm grid cells filled with occupancy evidence weights from trinocular stereoscopy, occupy hundreds of megabytes. 1,000 MIPS produces them in near real time. They acquire the original scene’s colors as a side effect of a learning process. Simulated run-throughs of the color grids can be mistaken for camera imagery of the real scene. We think the technique can guide commercial robots this decade.
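To make the idea concrete, the following is a minimal illustrative sketch of an occupancy evidence grid, not the project's actual implementation: a dense array of evidence weights, one per 16 mm cubic cell (the cell size quoted above), updated by marching along a sensing ray from a camera origin to a stereo-derived range point. The grid extent, the hit/miss weights, the step count, and the helper names world_to_cell and integrate_ray are assumptions made for this example.

import numpy as np

# Illustrative 3D occupancy evidence grid. Cell size follows the text (16 mm);
# the mapped volume and sensor-model weights below are assumed for the example.
CELL = 0.016                 # cell edge length in meters
EXTENT = (4.0, 4.0, 2.0)     # mapped volume in meters (assumed)

shape = tuple(int(round(e / CELL)) for e in EXTENT)
evidence = np.zeros(shape, dtype=np.float32)   # accumulated evidence weights per cell

def world_to_cell(p):
    """Map a 3D point in meters to an integer cell index, or None if outside the grid."""
    idx = tuple(int(c / CELL) for c in p)
    if all(0 <= i < s for i, s in zip(idx, shape)):
        return idx
    return None

def integrate_ray(origin, hit, miss_w=-0.4, hit_w=2.0, steps=200):
    """Add evidence along one sensing ray: cells traversed before the range
    reading become more likely empty, the cell containing the reading more
    likely occupied."""
    origin, hit = np.asarray(origin), np.asarray(hit)
    for t in np.linspace(0.0, 1.0, steps, endpoint=False):
        idx = world_to_cell(origin + t * (hit - origin))
        if idx is not None:
            evidence[idx] += miss_w
    idx = world_to_cell(hit)
    if idx is not None:
        evidence[idx] += hit_w

# Example: integrate one stereo-derived range point seen from a camera near the origin.
integrate_ray(origin=(0.1, 0.1, 0.5), hit=(2.0, 1.5, 0.8))
occupied = evidence > 0.0    # threshold the accumulated evidence to get binary occupancy
print(occupied.sum(), "cells currently judged occupied")

A full system would integrate many thousands of such rays per stereo frame and could also accumulate per-cell color statistics, which is one way the grid could come to carry the scene's colors as described above.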

past staff

  • Scott Crosby
  • Jesse Easudes