Recognizing action units for facial expression analysis

Ying-Li Tian, Takeo Kanade, and Jeffrey Cohn
IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 23, No. 2, February 2001, pp. 97-115.


Abstract
Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system recognizes fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as the inputs, a group of action units (neutral expression, six upper face AUs, and 10 lower face AUs) are recognized whether they occur alone or in combinations. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower face AUs. The generalizability of the system has been tested using independent image databases collected and FACS-coded for ground truth by different research teams.
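
For illustration, the following is a minimal Python sketch of the classification stage the abstract describes: parametric feature descriptions extracted during tracking feed a small neural network whose outputs correspond to individual AUs, so thresholding several outputs at once recognizes AU combinations as well as single AUs. The parameter names and values, the random network weights, and the 0.5 threshold are all illustrative assumptions, not the trained system from the paper.

import numpy as np

# Hypothetical upper-face feature parameters (e.g., brow and eyelid position
# changes relative to the neutral frame, plus transient furrow cues),
# standing in for the detailed parametric descriptions the AFA system
# extracts during tracking. The values here are made up for illustration.
upper_face_params = np.array([
    0.12,   # inner brow height change
    0.08,   # outer brow height change
    -0.05,  # top eyelid distance change
    0.02,   # bottom eyelid distance change
    0.30,   # crow's-feet furrow presence (transient feature)
    0.00,   # nasal-root furrow presence (transient feature)
])

# Upper face AU labels from the paper, plus the neutral expression.
UPPER_FACE_AUS = ["AU1", "AU2", "AU4", "AU5", "AU6", "AU7", "neutral"]

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def au_scores(x, n_hidden=6, n_out=len(UPPER_FACE_AUS)):
    """One-hidden-layer network: the general shape of the AU classifier,
    with untrained random weights standing in for the learned ones."""
    W1 = rng.standard_normal((n_hidden, x.size))
    b1 = rng.standard_normal(n_hidden)
    W2 = rng.standard_normal((n_out, n_hidden))
    b2 = rng.standard_normal(n_out)
    return sigmoid(W2 @ sigmoid(W1 @ x + b1) + b2)

scores = au_scores(upper_face_params)
# Reporting every AU whose output clears a threshold lets the system
# recognize AUs whether they occur alone or in combinations.
detected = [au for au, s in zip(UPPER_FACE_AUS, scores) if s > 0.5]
print(detected)

The paper trains separate classifiers for the upper and lower face; a lower-face version would follow the same pattern with 10 AU outputs plus neutral.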

Notes
Associated Center(s) / Consortia: Vision and Autonomous Systems Center
Associated Lab(s) / Group(s): Face Group and Component Analysis
Associated Project(s): Facial Expression Analysis

Text Reference
Ying-Li Tian, Takeo Kanade, and Jeffrey Cohn, "Recognizing action units for facial expression analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 23, No. 2, February 2001, pp. 97-115.

BibTeX Reference
@article{Tian_2001_3527,
   author = "Ying-Li Tian and Takeo Kanade and Jeffrey Cohn",
   title = "Recognizing action units for facial expression analysis",
   journal = "IEEE Transactions on Pattern Analysis and Machine Intelligence",
   pages = "97--115",
   month = "February",
   year = "2001",
   volume = "23",
   number = "2",
}