Multimodal coordination of facial action, head rotation, and eye motion during spontaneous smiles

Jeffrey Cohn, L.I. Reed, Tsuyoshi Moriyama, Jing Xiao, Karen Schmidt, and Zara Ambadar
Proceedings of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition (FG'04), 2004, pp. 129 - 138.



Abstract
Both the configuration of facial features and the timing of facial actions are important to emotion and communication. Previous literature has focused on the former. We developed an automatic facial expression analysis system that quantifies the timing of facial actions as well as head and eye motion during spontaneous facial expression. To assess coherence among these modalities, we recorded and analyzed spontaneous smiles in 62 young women of varied ethnicity ranging in age from 18 to 35 years. Spontaneous smiles occurred following directed facial action tasks, a situation likely to elicit spontaneous smiles of embarrassment. Smiles (AU 12) were manually FACS coded by certified FACS coders. 3D head motion was recovered using a cylindrical head model; motion vectors for lip-corner displacement were measured using feature-point tracking; eye closure and horizontal and vertical eye motion (from which to infer direction of gaze or visual regard) were measured by a generative model fitting approach. The mean within-subject correlations between lip-corner displacement, head motion, and eye motion ranged in magnitude from 0.36 to 0.50, suggesting moderate coherence among these features. Lip-corner displacement and head pitch were negatively correlated, as predicted for smiles of embarrassment. These findings are consistent with recent research in psychology suggesting that facial actions are embedded within coordinated motor structures. They suggest that the direction of correlation among features may discriminate between facial actions with similar morphology but different communicative meaning, inform automatic facial expression recognition, and provide normative data for animating computer avatars.
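The coherence measure reported above can be illustrated with a minimal sketch: compute a Pearson correlation between two tracked signals within each subject (e.g., lip-corner displacement vs. head pitch), then average across subjects. The function names and synthetic data below are hypothetical illustrations, not the authors' implementation; the negative coupling in the toy data merely mimics the reported pattern for embarrassed smiles.

```python
import numpy as np

def within_subject_correlation(signal_a, signal_b):
    """Pearson correlation between two equal-length time series
    for a single subject (e.g., lip-corner displacement vs. head pitch)."""
    return np.corrcoef(signal_a, signal_b)[0, 1]

def mean_coherence(subjects):
    """Average the within-subject correlations across subjects.
    `subjects` is a list of (signal_a, signal_b) pairs."""
    return np.mean([within_subject_correlation(a, b) for a, b in subjects])

# Synthetic data: head pitch negatively coupled to lip-corner displacement,
# loosely mimicking the pattern reported for smiles of embarrassment.
rng = np.random.default_rng(0)
subjects = []
for _ in range(5):
    lip = np.cumsum(rng.normal(size=200))      # lip-corner displacement (random walk)
    pitch = -0.5 * lip + rng.normal(size=200)  # head pitch, negatively coupled
    subjects.append((lip, pitch))

print(round(mean_coherence(subjects), 2))  # a negative mean correlation
```

Averaging signed correlations preserves direction, which matters here: the paper argues that the *sign* of the coupling (e.g., lip corners up while the head pitches down) can distinguish expressions with similar morphology.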

Notes
Associated Center(s) / Consortia: Vision and Autonomous Systems Center
Associated Lab(s) / Group(s): Face Group and Component Analysis
Associated Project(s): Facial Expression Analysis
Number of pages: 7

Text Reference
Jeffrey Cohn, L.I. Reed, Tsuyoshi Moriyama, Jing Xiao, Karen Schmidt, and Zara Ambadar, "Multimodal coordination of facial action, head rotation, and eye motion during spontaneous smiles," Proceedings of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition (FG'04), 2004, pp. 129 - 138.

BibTeX Reference
@inproceedings{Cohn_2004_4812,
   author = "Jeffrey Cohn and L.I. Reed and Tsuyoshi Moriyama and Jing Xiao and Karen Schmidt and Zara Ambadar",
   title = "Multimodal coordination of facial action, head rotation, and eye motion during spontaneous smiles",
   booktitle = "Proceedings of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition (FG'04)",
   pages = "129--138",
   year = "2004",
}