NavCog3: An Evaluation of a Smartphone-Based Blind Indoor Navigation Assistant with Semantic Features in a Large-Scale Environment

Daisuke Sato, Uran Oh, Kakuya Naito, Hironobu Takagi, Kris Kitani, and Chieko Asakawa
Conference paper, Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '17), pp. 270–279, November 2017


Abstract

Navigating unfamiliar environments is challenging for most people, and especially for individuals with visual impairments. While many personal navigation tools have been proposed to enable independent indoor navigation, they suffer from insufficient accuracy (e.g., 5–10 m), do not provide semantic information about the surroundings (e.g., doorways, shops, etc.), or require specialized devices to function. Moreover, many systems are evaluated only in constrained scenarios, which may not accurately reflect their performance in the real world. We therefore designed and implemented NavCog3, a smartphone-based indoor navigation assistant, and evaluated it in a 21,000 square-meter shopping mall. In addition to turn-by-turn instructions, it provides information about landmarks (e.g., tactile paving) and nearby points of interest. We first conducted a controlled study with 10 visually impaired users to assess localization accuracy and the perceived usefulness of semantic features. To understand the usability of the app in a real-world setting, we then conducted a second study with 43 participants with visual impairments who could freely navigate the shopping mall using NavCog3. Our findings suggest that NavCog3 opens a new opportunity for users with visual impairments to independently find and visit large, complex places with confidence.

Notes
Associated Project - Indoor People Localization, Associated Project - NavCog, Associated Lab - Cognitive Assistance Lab

BibTeX Reference
@conference{Oh-2017-103571,
author = {Daisuke Sato and Uran Oh and Kakuya Naito and Hironobu Takagi and Kris Kitani and Chieko Asakawa},
title = {NavCog3: An Evaluation of a Smartphone-Based Blind Indoor Navigation Assistant with Semantic Features in a Large-Scale Environment},
booktitle = {Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility},
year = {2017},
month = {November},
pages = {270-279},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {Indoor navigation, visual impairments, points of interest, voice-based interaction, user evaluation},
}