VizMap: Accessible Visual Information Through Crowdsourced Map Reconstruction

Cole Gleason, Anhong Guo, Gierad Laput, Kris M. Kitani, and Jeffrey P. Bigham
Conference Paper, Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '16), pp. 273–274, October 2016

Abstract

When navigating indoors, blind people are often unaware of key visual information, such as posters, signs, and exit doors. Our VizMap system uses computer vision and crowdsourcing to collect this information and make it available non-visually. VizMap starts with videos taken by on-site sighted volunteers and uses them to create a 3D spatial model. The video frames are then semantically labeled by remote crowd workers with key visual information. These semantic labels are positioned within and embedded into the reconstructed 3D model, forming a queryable spatial representation of the environment. VizMap can then localize the user from a photo taken with their smartphone and enable them to explore the visual elements nearby. We explore a range of example applications enabled by our reconstructed spatial representation. With VizMap, we move toward integrating the strengths of the end user, the on-site crowd, the online crowd, and computer vision to solve a long-standing challenge in indoor blind exploration.
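
To make the final querying step concrete, here is a minimal Python sketch of how nearby crowd-sourced labels might be retrieved once the user's photo has been localized against the 3D model. This is an illustration only, not the authors' implementation: the names (SemanticLabel, nearby_labels) and the fixed-radius Euclidean query are assumptions, and in the real system the user's position would come from image-based localization against the reconstruction.

# Minimal sketch of a VizMap-style nearby-label query (illustrative only).
# Assumes the offline pipeline has already produced (a) a 3D reconstruction
# and (b) crowd-sourced semantic labels anchored to 3D points in that model.

from dataclasses import dataclass
import math


@dataclass
class SemanticLabel:
    text: str   # crowd-provided description, e.g. "exit door"
    x: float    # label's anchor point in the model (meters)
    y: float
    z: float


def nearby_labels(labels, user_pos, radius_m=5.0):
    """Return labels within radius_m of the user's estimated 3D position,
    sorted nearest-first, so they can be read out non-visually."""
    hits = []
    for label in labels:
        d = math.dist((label.x, label.y, label.z), user_pos)
        if d <= radius_m:
            hits.append((d, label))
    return [label for d, label in sorted(hits, key=lambda pair: pair[0])]


if __name__ == "__main__":
    # Toy model: two labels embedded in the reconstruction.
    labels = [
        SemanticLabel("exit door", 2.0, 0.0, 1.0),
        SemanticLabel("seminar poster", 12.0, 3.0, 1.5),
    ]
    # In the real system, user_pos would be estimated by localizing the
    # user's smartphone photo against the reconstructed 3D model.
    user_pos = (0.0, 0.0, 1.5)
    for label in nearby_labels(labels, user_pos):
        print(label.text)   # -> "exit door"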

BibTeX

@conference{Gleason-2016-109797,
author = {Cole Gleason and Anhong Guo and Gierad Laput and Kris M. Kitani and Jeffrey P. Bigham},
title = {VizMap: Accessible Visual Information Through Crowdsourced Map Reconstruction},
booktitle = {Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '16)},
year = {2016},
month = {October},
pages = {273--274},
}