Effective Non-Verbal Communication for Mobile Robots using Expressive Lights

Kim Baraka
Master's Thesis, Tech. Report CMU-RI-TR-16-12, Robotics Institute, Carnegie Mellon University, May 2016


Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author’s copyright. These works may not be reposted without the explicit permission of the copyright holder.


Mobile robots are entering our daily lives and are expected to carry out tasks with, for, and around humans in diverse environments. Because these robots are mobile and pass through many different states while executing their tasks, revealing robot state information during task execution is crucial for effective human-robot collaboration, greater trust in the robot, and more engaging human-robot social interactions. Verbal communication combined with on-screen display is currently the typical mechanism for communicating with humans on such robots. However, these mechanisms may fail for mobile robots due to spatio-temporal limitations. To remedy these problems, in this thesis we use expressive lights as a primary modality to communicate useful information about the robot's state to humans. Such lights are persistent, non-invasive, and visible at a distance, unlike other existing modalities, which they can complement or replace when those are ineffective. Current light arrays provide a very large animation space, which we simplify by considering a handful of parametrized signal shapes that maintain great flexibility in animation design. We present a formalism for light animation control and a mapping architecture from our representation of robot state to our parametrized light animation space. The proposed mapping generalizes to multiple light strips and even to other expression modalities. We also show how this mapping can adapt, through a personalization algorithm, to the temporal preferences of individuals engaging in long-term interactions with the robot. We implement our framework on CoBot, a mobile multi-floor service robot, and evaluate its validity through several user studies. Our results show that carefully designed expressive lights on a mobile robot help humans better understand robot states and actions, and can positively influence people's behavior in the real world.
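To make the idea of "parametrized signal shapes" concrete, the sketch below shows what such a signal family might look like in code. This is an illustrative assumption for exposition only: the function names, parameters (period, duty cycle, brightness level), and the particular blink/fade shapes are not taken from the thesis itself.

```python
import math

def blink(t, period=1.0, duty=0.5, level=1.0):
    """Square-wave blink: on for the `duty` fraction of each period."""
    return level if (t % period) < duty * period else 0.0

def fade(t, period=2.0, level=1.0):
    """Sinusoidal fade: brightness oscillates smoothly between 0 and `level`."""
    return level * 0.5 * (1.0 - math.cos(2.0 * math.pi * t / period))

def sample(shape, duration, rate=10, **params):
    """Sample a signal shape at `rate` Hz, yielding one brightness per frame."""
    n = int(duration * rate)
    return [shape(i / rate, **params) for i in range(n)]

# A robot-state-to-animation mapping could then pick a shape and parameters,
# e.g. a slow fade while "waiting" vs. a fast blink while "blocked".
waiting_frames = sample(fade, duration=2.0, period=2.0)
blocked_frames = sample(blink, duration=2.0, period=0.4, duty=0.5)
```

Restricting the animation space to a few such shapes, each controlled by a handful of parameters, is what keeps the design space tractable while preserving expressive flexibility.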

BibTeX Reference

@mastersthesis{baraka2016expressive,
  author  = {Kim Baraka},
  title   = {Effective Non-Verbal Communication for Mobile Robots using Expressive Lights},
  year    = {2016},
  month   = {May},
  school  = {Carnegie Mellon University},
  address = {Pittsburgh, PA},
  number  = {CMU-RI-TR-16-12},
}