EdgeSonic: Image Feature Sonification for the Visually Impaired

Tsubasa Yoshida, Kris M. Kitani, Serge Belongie, Kevin Schlei, and Hideki Koike
Conference Paper, Proceedings of the 2nd Augmented Human International Conference (AH '11), March 2011

Abstract

We propose a framework to help a visually impaired user recognize objects in an image by sonifying image edge features and distance-to-edge maps. Visually impaired people usually touch objects to recognize their shape. However, it is difficult to recognize objects printed on flat surfaces, or objects that can only be viewed from a distance, through the haptic senses alone. Our ultimate goal is to help a visually impaired user recognize basic object shapes by transposing them to aural information. Our proposed method provides two types of image sonification: (1) local edge gradient sonification and (2) sonification of the distance to the closest image edge. We implemented our method on a touch-panel mobile device, which allows the user to explore an image aurally by sliding a finger across the touch screen. Preliminary experiments show that the combination of local edge gradient sonification and distance-to-edge sonification is effective for understanding basic line drawings. Furthermore, our tests show a significant improvement in image understanding with the introduction of proper user training.
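
The paper itself does not include code, but the preprocessing behind the two sonification modes is straightforward to prototype. The sketch below, written against OpenCV and NumPy, builds an edge map, a local gradient orientation map, and a distance-to-edge map, then maps the distance under a simulated touch point to a tone frequency. The Canny thresholds, the distance-to-pitch mapping, and the file and variable names are illustrative assumptions, not the authors' actual design.

import numpy as np
import cv2

def build_maps(path):
    """Prepare the maps the two sonification modes would read from:
    a binary edge map, a local edge orientation map, and a map of the
    distance from each pixel to its nearest edge."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 100, 200)                # 255 on edge pixels
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)  # horizontal gradient
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)  # vertical gradient
    orientation = np.arctan2(gy, gx)                 # edge direction, radians
    # distanceTransform measures distance to the nearest zero pixel,
    # so invert the edge map: edges become the zero "seeds".
    dist = cv2.distanceTransform(255 - edges, cv2.DIST_L2, 5)
    return edges, orientation, dist

def distance_to_pitch(d, f_near=880.0, f_far=220.0, d_max=100.0):
    """Hypothetical mapping: a high pitch at an edge, sliding down to a
    low drone as the finger moves away from the nearest edge."""
    t = min(float(d), d_max) / d_max
    return f_near + t * (f_far - f_near)

if __name__ == "__main__":
    edges, orientation, dist = build_maps("line_drawing.png")  # any test image
    x, y = 120, 80  # simulated touch location on the screen
    print("distance to nearest edge: %.1f px" % dist[y, x])
    print("tone frequency: %.0f Hz" % distance_to_pitch(dist[y, x]))
    print("local edge orientation: %.2f rad" % orientation[y, x])

In an interactive version, the (x, y) pair would come from touch events, and the distance and orientation values at that pixel would continuously drive the synthesizer parameters as the finger slides across the image.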

BibTeX

@conference{Yoshida-2011-109825,
author = {Tsubasa Yoshida and Kris M. Kitani and Serge Belongie and Kevin Schlei and Hideki Koike},
title = {EdgeSonic: Image Feature Sonification for the Visually Impaired},
booktitle = {Proceedings of the 2nd Augmented Human International Conference (AH '11)},
year = {2011},
month = {March},
}