Automated face coding: A computer-vision based method of facial expression analysis

Jeffrey Cohn, Adena Zlochower, Jenn-Jier James Lien, Y. T. Wu, and Takeo Kanade
Conference Paper, Proceedings of 7th European Conference on Facial Expression Measurement and Meaning, pp. 329 - 333, July, 1997

Abstract

The face is a rich source of information about human behavior. Facial expression displays emotion, regulates parent-infant interaction, reveals brain function, and signals developmental transitions. To make use of this information, reliable, valid, and efficient methods of measurement are critical. Current human-observer-based methods vary in their specificity, comprehensiveness, objectivity, and utility. The Facial Action Coding System (FACS: Ekman & Friesen, 1978; Baby FACS: Oster & Rosenstein, 1992) is the most comprehensive method of coding facial expression, since all visibly discriminable movements are coded with minimal subjective interpretation. With extensive training, FACS coders can achieve acceptable levels of inter-observer reliability. FACS is labor intensive, however, and coding criteria may drift over time. The time involved in implementing FACS discourages standardized measurement and results in the use of less specific coding systems. These problems promote the use of smaller samples, prolong study completion times, and limit the generalizability of study findings. To enable rigorous, efficient, and quantitative measurement of facial expression, we used computer vision techniques based on optical flow to develop and implement the first version of the Automated Face Coding (AFC) method (Cohn et al., 1997). Optical flow refers to changes in pixel intensity across an image sequence and provides a model of underlying muscle activation. Motion from the subtle texture changes associated with muscle contraction is extracted, and the pattern of movement is used to quantify expression. AFC was developed on image sequences from 100 university students who were videotaped while performing a series of facial expressions. The sequences were coded by certified FACS coders. In the digitized image sequences, facial features were automatically tracked using the Lucas-Kanade algorithm for estimating optical flow.
Recognition accuracy for action units or AU combinations in the brow, eye, and mouth regions was 79% or better. AFC demonstrated high concurrent validity with human FACS coding. In the work to be presented, we applied algorithms developed in adults to 20 infants 3 to 6 months of age. Observations were obtained during 3-minute face-to-face interactions with mothers and fathers. Only frontal images with minimal head rotation and occlusion were selected for automated analysis. Smiles, cry faces, pouts, brow raises, and brow furrows were analyzed. Although there are clearly differences in facial morphology between infants and adults, preliminary findings suggest that algorithms for tracking facial features and classifying expression produce comparable results in infants and adults when head rotation is within 10 degrees.
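The core estimation step the abstract describes, Lucas-Kanade optical flow, can be illustrated with a minimal sketch. This is not the authors' implementation; it is a toy NumPy version, using a synthetic frame pair in place of real face imagery, that solves the Lucas-Kanade brightness-constancy least-squares system for a single displacement vector over a patch:

```python
import numpy as np

def lucas_kanade_flow(prev, curr):
    """Estimate one (u, v) displacement for a patch by solving the
    Lucas-Kanade (1981) brightness-constancy system in least squares:
        Ix*u + Iy*v = -It, stacked over every pixel in the patch."""
    Ix = np.gradient(prev, axis=1)   # horizontal spatial gradient
    Iy = np.gradient(prev, axis=0)   # vertical spatial gradient
    It = curr - prev                 # temporal gradient between frames
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic "frame pair": a smooth intensity pattern translated
# 1 pixel to the right (standing in for skin-texture motion).
X, Y = np.meshgrid(np.arange(48), np.arange(36))
prev = np.sin(0.2 * X) + np.cos(0.15 * Y)
curr = np.sin(0.2 * (X - 1)) + np.cos(0.15 * Y)

u, v = lucas_kanade_flow(prev, curr)
# u should recover a displacement close to 1.0 pixel, v close to 0.0
```

In the AFC pipeline described above, displacement estimates of this kind, computed around tracked facial feature points, supply the motion pattern that is then classified into action units; the single-patch solve here omits the windowing and feature selection a full tracker needs.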

BibTeX

@conference{Cohn-1997-14427,
author = {Jeffrey Cohn and Adena Zlochower and Jenn-Jier James Lien and Y. T. Wu and Takeo Kanade},
title = {Automated face coding: A computer-vision based method of facial expression analysis},
booktitle = {Proceedings of 7th European Conference on Facial Expression Measurement and Meaning},
year = {1997},
month = {July},
pages = {329--333},
}