Recognizing Action Units for Facial Expression Analysis

Ying-Li Tian, Takeo Kanade, and Jeffrey Cohn
tech. report CMU-RI-TR-99-40, Robotics Institute, Carnegie Mellon University, December, 1999


Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions (e.g. happiness and anger). Such prototypic expressions, however, occur infrequently. Human emotions and intentions are communicated more often by changes in one or two discrete facial features. We develop an automatic system to analyze subtle changes in facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal image sequence. Unlike most existing systems, our system attempts to recognize fine-grained changes in facial expression based on Facial Action Coding System (FACS) action units (AUs), instead of six basic expressions (e.g. happiness and anger). Multi-state face and facial component models are proposed for tracking and modeling different facial features, including lips, eyes, brows, cheeks, and their related wrinkles and facial furrows. We then convert the tracking results to detailed parametric descriptions of the facial features. With these features as inputs, 11 lower face AUs and 7 upper face AUs are recognized by a neural network algorithm. Recognition rates of 96.7% for lower face AUs and 95% for upper face AUs are obtained. The recognition results indicate that our system can identify action units regardless of whether they occur singly or in combination.
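The pipeline the abstract describes (parametric facial-feature descriptions fed to a neural network with one output unit per AU, so that several units can fire at once for AU combinations) might be sketched roughly as follows. All names, dimensions, weights, and the 0.5 decision threshold here are illustrative assumptions, not the authors' actual model:

```python
# Minimal sketch of an AU recognizer: a one-hidden-layer network mapping
# a parametric feature vector to independent per-AU activations.
# Feature count, hidden size, and random weights are assumptions.
import math
import random

N_FEATURES = 15   # e.g. brow height, eye opening, furrow depth (assumed)
N_HIDDEN = 6      # assumed hidden-layer size
N_AUS = 7         # 7 upper face AUs, per the report

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class AUNet:
    def __init__(self, seed=0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-0.5, 0.5) for _ in range(N_FEATURES)]
                   for _ in range(N_HIDDEN)]
        self.w2 = [[rng.uniform(-0.5, 0.5) for _ in range(N_HIDDEN)]
                   for _ in range(N_AUS)]

    def forward(self, features):
        hidden = [sigmoid(sum(w * f for w, f in zip(row, features)))
                  for row in self.w1]
        # Independent sigmoid outputs, one per AU: multiple AUs can be
        # active simultaneously, which is how AU combinations are handled.
        return [sigmoid(sum(w * h for w, h in zip(row, hidden)))
                for row in self.w2]

net = AUNet()
scores = net.forward([0.1] * N_FEATURES)
active_aus = [i for i, s in enumerate(scores) if s > 0.5]  # assumed threshold
```

Treating each AU as an independent output, rather than classifying into one of six basic expressions, is what lets a single forward pass report AU combinations.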

Keywords: Facial expression analysis, action units, neural network

Associated Center(s) / Consortia: Vision and Autonomous Systems Center
Associated Lab(s) / Group(s): Face Group
Associated Project(s): Face Databases

Text Reference
Ying-Li Tian, Takeo Kanade, and Jeffrey Cohn, "Recognizing Action Units for Facial Expression Analysis," tech. report CMU-RI-TR-99-40, Robotics Institute, Carnegie Mellon University, December, 1999

BibTeX Reference
@techreport{tian1999recognizing,
   author = "Ying-Li Tian and Takeo Kanade and Jeffrey Cohn",
   title = "Recognizing Action Units for Facial Expression Analysis",
   institution = "Robotics Institute, Carnegie Mellon University",
   month = "December",
   year = "1999",
   number = "CMU-RI-TR-99-40",
   address = "Pittsburgh, PA",
}