NavCog: A Navigational Cognitive Assistant for the Blind

Dragan Ahmetovic, Cole Gleason, Chengxiong Ruan, Kris M. Kitani, Hironobu Takagi, and Chieko Asakawa
Conference Paper, Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI '16), pp. 90-99, September 2016

Abstract

Turn-by-turn navigation is a useful paradigm for assisting people with visual impairments during mobility, as it reduces the cognitive load of having to simultaneously sense, localize, and plan. To realize such a system, it is necessary to automatically localize the user with sufficient accuracy, to provide timely and efficient instructions, and to make the system easy to deploy to new spaces.

We propose a smartphone-based system that provides turn-by-turn navigation assistance based on accurate real-time localization over large spaces. In addition to basic navigation capabilities, the system also informs the user about nearby points of interest (POIs) and accessibility issues (e.g., stairs ahead). After deploying the system across several indoor and outdoor areas of a university campus, we evaluated it with six blind subjects and showed that it can guide visually impaired users through complex and unfamiliar environments.
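The abstract describes the guidance behavior only at a high level. As a rough illustration (not the authors' implementation), the core turn-by-turn loop could be sketched in Swift as below, since NavCog runs on a smartphone. All names here (TurnByTurnGuide, Waypoint, triggerRadius) are hypothetical, and in the real system the position estimates would come from the accurate real-time localizer mentioned above.

import Foundation

// Minimal sketch of the guidance logic described in the abstract:
// the app tracks the user's estimated position along a planned route
// and issues spoken instructions for turns, nearby points of interest
// (POIs), and accessibility issues such as stairs ahead.

struct Point {
    var x: Double
    var y: Double

    func distance(to other: Point) -> Double {
        return ((x - other.x) * (x - other.x) + (y - other.y) * (y - other.y)).squareRoot()
    }
}

enum Announcement {
    case turn(direction: String)
    case poi(name: String)
    case accessibility(warning: String)
    case arrived
}

struct Waypoint {
    var position: Point
    var turnDirection: String?          // e.g. "left", "right"; nil on straight segments
    var nearbyPOI: String?              // e.g. "cafeteria entrance"
    var accessibilityNote: String?      // e.g. "stairs ahead"
}

final class TurnByTurnGuide {
    private let route: [Waypoint]
    private var nextIndex = 0
    private let triggerRadius: Double   // meters; hypothetical announcement threshold

    init(route: [Waypoint], triggerRadius: Double = 3.0) {
        self.route = route
        self.triggerRadius = triggerRadius
    }

    // Called whenever the localizer produces a new position estimate.
    // Returns the announcements the app should speak at this moment.
    func update(position: Point) -> [Announcement] {
        guard nextIndex < route.count else { return [] }
        let waypoint = route[nextIndex]
        guard position.distance(to: waypoint.position) <= triggerRadius else { return [] }

        var announcements: [Announcement] = []
        if let note = waypoint.accessibilityNote {
            announcements.append(.accessibility(warning: note))
        }
        if let poi = waypoint.nearbyPOI {
            announcements.append(.poi(name: poi))
        }
        if let direction = waypoint.turnDirection {
            announcements.append(.turn(direction: direction))
        }
        nextIndex += 1
        if nextIndex == route.count {
            announcements.append(.arrived)
        }
        return announcements
    }
}

For example, a route point at the top of a staircase might be encoded as Waypoint(position: Point(x: 12, y: 4), turnDirection: "left", nearbyPOI: nil, accessibilityNote: "stairs ahead"), so that the warning is spoken before the turn instruction.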

BibTeX

@conference{Ahmetovic-2016-109801,
author = {Dragan Ahmetovic and Cole Gleason and Chengxiong Ruan and Kris M. Kitani and Hironobu Takagi and Chieko Asakawa},
title = {NavCog: A Navigational Cognitive Assistant for the Blind},
booktitle = {Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI '16)},
year = {2016},
month = {September},
pages = {90--99},
}