Bimodal expression of emotion by face and voice

Jeffrey Cohn and G. S. Katz
Workshop Paper, MULTIMEDIA '98 Workshop on Face / Gesture Recognition and Their Applications, pp. 41-44, September 1998

Abstract

A goal of human-computer interaction research is to build computer systems that can recognize and understand nonverbal communication. In a series of studies, we developed semi-automated methods of discriminating emotion and para-linguistic communication in face and voice. In study 1, three computer-vision-based modules reliably recognized FACS action units, the smallest visibly discriminable changes in facial expression. Automated Face Analysis demonstrated convergent validity with manual coding for 15 action units and action unit combinations central to the expression of emotion. In study 2, prosodic measures discriminated pragmatic intent in infant-directed speech with accuracy ranging from 61% to 65% in test samples. In study 3, combined facial EMG and prosodic measures discriminated between negative, neutral, and positive emotion with accuracy ranging from 47% to 79% in test samples. These results support the feasibility of human-computer interfaces that are sensitive to the full range of human nonverbal communication.
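To make the bimodal discrimination task in study 3 concrete, the following is a minimal, purely illustrative sketch of classifying negative, neutral, and positive affect from a combined feature vector of vocal prosody and facial EMG. The feature names, values, and nearest-centroid classifier are assumptions for illustration only; they are not the methods or data of the paper.

```python
# Hypothetical sketch: discriminating negative / neutral / positive affect
# from combined prosodic and facial-EMG features. All numbers and feature
# choices are illustrative, not taken from the study.

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_centroid(x, centroids):
    """Return the label whose class centroid is closest (squared Euclidean)."""
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2
                                   for a, b in zip(x, centroids[lbl])))

# Each vector: [F0 mean (Hz), F0 range (Hz), zygomatic EMG, corrugator EMG]
training = {
    "negative": [[190, 40, 0.1, 0.9], [200, 50, 0.2, 0.8]],
    "neutral":  [[210, 60, 0.3, 0.3], [220, 70, 0.3, 0.4]],
    "positive": [[260, 120, 0.9, 0.1], [250, 110, 0.8, 0.2]],
}
centroids = {label: centroid(vs) for label, vs in training.items()}

# A high-F0 sample with strong zygomatic (smiling) activity
print(nearest_centroid([255, 115, 0.85, 0.15], centroids))  # prints "positive"
```

The point of the sketch is only that concatenating the two modalities' features into one vector lets a single classifier exploit both channels, which is the premise behind combining face and voice measures.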

BibTeX

@inproceedings{Cohn-1998-14746,
author = {Jeffrey Cohn and G. S. Katz},
title = {Bimodal expression of emotion by face and voice},
booktitle = {Proceedings of MULTIMEDIA '98 Workshop on Face / Gesture Recognition and Their Applications},
year = {1998},
month = {September},
pages = {41--44},
}