PhD Thesis Proposal

Heather Knight, Carnegie Mellon University
Thursday, December 11
10:00 am to 12:00 pm
Expressive Motion for Low Degree of Freedom Robots

Event Location: NSH 3305

Abstract: As social and collaborative robots move into everyday life, the need for algorithms enabling their acceptance becomes critical. The proposed work will create a framework for Expressive Motion generation that allows a robot to use modified task motions to communicate its state naturally and efficiently, including mental state, task state, and social relationships. There is a saying that 95% of communication is body language, but few robots today make effective use of that ubiquitous channel. By adhering to expressive motions that people innately use and understand, robot behaviors will become legible to the general populace and potentially more charismatic.

The hypothesis of this thesis is that dynamic motion features can be layered onto pre-existing robot task behaviors such that the robot legibly communicates a variety of states. Typically, researchers either build instances of expressive motion into individual robot behaviors, which does not scale, or use an independent channel, such as lights or facial expressions, that does not interfere with the robot's task. What is unique about this work is that we use the same modality for both task and expression: the robot's joint and whole-body motion. While this is not the only way for a robot to communicate expressively, Expressive Motion is a channel available to all moving machines, and it can work in tandem with any additional communication modalities.

Our methodological approach is to operationalize a well-known technique from acting training that specifies categorical features of motion expressions, namely, the Laban Effort System. We will also include certain contextual specifications provided by Bogart's nine Viewpoints of Space and Time. The Laban Effort System describes a four-dimensional state space of Time, Weight, Space, and Flow, which we will implement as dynamic features of a robot's motion. As additional inputs to our Expressive Motion methodology, we will use the specifications of a robot's degrees of freedom (e.g., rotations and translations) to choose the best features to represent each Effort. We will also include consideration of context such as the robot's task requirements and relevant Viewpoint features. Sample Viewpoints include Architecture, the spatial dimensions within which an agent performs its motions, and Kinesthetic Response, the temporal reaction to other moving agents or entities in one's periphery.
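To make the idea of layering Effort dimensions onto an existing task motion concrete, the sketch below modulates a one-dimensional joint trajectory with three of the four Effort dimensions (Space, which concerns path directness, needs more than one degree of freedom to illustrate). The function name, parameter ranges, and particular feature mappings are illustrative assumptions for this example, not the proposed implementation.

```python
def apply_effort(waypoints, time=0.0, weight=0.0, flow=0.0):
    """Modulate a 1-D joint trajectory with Laban-style Effort parameters.

    Each parameter lies in [-1, 1] (0 leaves the trajectory unchanged):
      time:   +1 sudden (rapid onset) .. -1 sustained (gradual ease-in)
      weight: +1 strong (exaggerated) .. -1 light (damped amplitude)
      flow:   +1 free (loose, as-is)  .. -1 bound (smoothed, controlled)
    Returns a list of (timestamp, position) samples over a unit duration.
    """
    n = len(waypoints)
    assert n >= 2, "need at least a start and an end waypoint"
    p = 2.0 ** time  # Time: exponent that warps pacing over the unit duration
    samples = []
    for i, q in enumerate(waypoints):
        u = i / (n - 1)  # normalized progress through the motion
        t = u ** p       # sudden motions front-load progress; sustained ease in
        # Weight: exaggerate (strong) or damp (light) deviation from the start pose
        q_mod = waypoints[0] + (q - waypoints[0]) * (1.0 + 0.5 * weight)
        samples.append((t, q_mod))
    # Flow: bound motion is low-pass filtered toward neighboring samples,
    # leaving the task-critical start and end poses untouched
    if flow < 0:
        k = -0.5 * flow
        interior = [
            (samples[j][0],
             (1 - k) * samples[j][1]
             + 0.5 * k * (samples[j - 1][1] + samples[j + 1][1]))
            for j in range(1, n - 1)
        ]
        samples = [samples[0]] + interior + [samples[-1]]
    return samples
```

Note that the task content (the waypoints, and in particular the start and goal poses) is preserved while the expressive parameters only reshape timing, amplitude, and smoothness, which is the sense in which expression is layered on top of the task behavior.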

The technical contributions of this work will include: 1) a methodology to layer Expressive Motion features onto robot task behavior that is fully specified for low degree of freedom robots; 2) a methodology for selecting, exploring, and making generalizations about how to map these motion features to particular robot state communications. We will use experimental studies of human-robot interaction to evaluate the legibility, attributions, and impact of these technical components, and a naturalistic deployment on the CoBot robot to assess our Expressive Motion methodology in a real-world setting.

Committee: Reid Simmons, Chair
Manuela Veloso
Aaron Steinfeld
Guy Hoffman, Interdisciplinary Center Herzliya