Automated face analysis by feature point tracking has high concurrent validity with manual FACS coding

Jeffrey Cohn, Adena Zlochower, Jenn-Jier James Lien, and Takeo Kanade
Psychophysiology, Vol. 36, 1999, pp. 35-43.



Abstract
The face is a rich source of information about human behavior. Available methods for coding facial displays, however, are human-observer dependent, labor-intensive, and difficult to standardize. To enable rigorous and efficient quantitative measurement of facial displays, we have developed an automated method of facial display analysis. In this report we compare its results with those of manual FACS (Facial Action Coding System; Ekman & Friesen, 1978a) coding. One hundred university students were videotaped while performing a series of facial displays. The image sequences were coded from videotape by certified FACS coders. Fifteen action units and action unit combinations that occurred a minimum of 25 times were selected for automated analysis. Facial features were automatically tracked in digitized image sequences using a hierarchical algorithm for estimating optical flow. The measurements were normalized for variation in position, orientation, and scale. The image sequences were randomly divided into a training set and a cross-validation set, and discriminant function analyses were conducted on the feature point measurements. In the training set, average agreement with manual FACS coding was 92% or higher for action units in the brow, eye, and mouth regions. In the cross-validation set, average agreement was 91%, 88%, and 81% for action units in the brow, eye, and mouth regions, respectively. Automated Face Analysis by feature point tracking demonstrated high concurrent validity with manual FACS coding.
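
A minimal sketch may make the pipeline the abstract describes more concrete: track facial feature points with a pyramidal (hierarchical) Lucas-Kanade optical flow estimator, normalize the tracked points for position, orientation, and scale, and classify action units with a discriminant function on a training/cross-validation split. This is not the authors' implementation; OpenCV's pyramidal Lucas-Kanade and scikit-learn's LinearDiscriminantAnalysis stand in for the paper's hierarchical flow algorithm and discriminant function analyses, and the eye-point indices, data loader, and feature vectors are hypothetical.

import cv2
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

def track_feature_points(frames, initial_points):
    """Track manually initialized feature points across an image sequence
    using pyramidal Lucas-Kanade optical flow (a hierarchical estimator)."""
    lk_params = dict(
        winSize=(15, 15),
        maxLevel=3,  # number of pyramid levels above the base image
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 20, 0.03),
    )
    points = np.asarray(initial_points, dtype=np.float32).reshape(-1, 1, 2)
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    trajectory = [points.reshape(-1, 2).copy()]
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        points, status, _err = cv2.calcOpticalFlowPyrLK(prev, gray, points, None, **lk_params)
        trajectory.append(points.reshape(-1, 2).copy())
        prev = gray
    return np.stack(trajectory)  # shape: (n_frames, n_points, 2)

def normalize_points(trajectory, eye_left=0, eye_right=1):
    """Remove variation in position, orientation, and scale by translating,
    rotating, and scaling every frame to the inter-ocular axis of the first
    frame (the eye-point indices here are an assumption)."""
    base = trajectory[0]
    axis = base[eye_right] - base[eye_left]
    scale = np.linalg.norm(axis)
    angle = np.arctan2(axis[1], axis[0])
    c, s = np.cos(-angle), np.sin(-angle)
    rot = np.array([[c, -s], [s, c]])
    centered = trajectory - base[eye_left]
    return (centered @ rot.T) / scale

# Hypothetical training step: X holds one normalized feature-displacement
# vector per image sequence, y holds the manual FACS action-unit codes.
# X, y = load_feature_vectors_and_labels()
# X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
# print("cross-validation agreement:", clf.score(X_test, y_test))

The hierarchical (coarse-to-fine) estimation is the important design choice: by first estimating flow on subsampled pyramid levels, the tracker can follow the larger frame-to-frame displacements that brow raises and mouth openings produce, which a single-scale Lucas-Kanade window would lose.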

Notes
Associated Center(s) / Consortia: Vision and Autonomous Systems Center
Associated Lab(s) / Group(s): Face Group and Component Analysis
Associated Project(s): Facial Expression Analysis

Text Reference
Jeffrey Cohn, Adena Zlochower, Jenn-Jier James Lien, and Takeo Kanade, "Automated face analysis by feature point tracking has high concurrent validity with manual FACS coding," Psychophysiology, Vol. 36, 1999, pp. 35-43.

BibTeX Reference
@article{Cohn_1999_3108,
   author = "Jeffrey Cohn and Adena Zlochower and Jenn-Jier James Lien and Takeo Kanade",
   title = "Automated face analysis by feature point tracking has high concurrent validity with manual FACS coding",
   journal = "Psychophysiology",
   pages = "35 - 43",
   year = "1999",
   volume = "36",
}